[vox-tech] Kernel not seeing all my RAM
Bill Broadley
bill at cse.ucdavis.edu
Tue Feb 3 23:49:29 PST 2009
Chanoch (Ken) Bloom wrote:
> On Tue, 2009-02-03 at 10:28 -0800, Bill Broadley wrote:
>> Chanoch (Ken) Bloom wrote:
>>> I'm upgrading an x86
>> Not x86-64?
>
> There's two machines with this problem. One is really an x86. The other
> is an amd64, but it's running a totally 32-bit system since it shares
> home directories by NFS with the first machine (and a couple other
> 32-bit only machines).
There is no problem using a 64-bit NFS server with a 32-bit NFS client.
> With regard to the amd64, I'd like to have it run a 64-bit kernel with a
> 32-bit userspace
Why? Things like mplayer (with a bunch of codecs), flash, java plugin, dvd
playback, video encoding, etc "just work". I've not seen any problems with
google earth 5.0, picasa, firefox (with plugins), etc.
>, but the machines are running Gentoo, and I don't know
> how to do that in Gentoo.
Until recently I always installed 32-bit machines for anything with any sort
of desktop duty. I didn't want a chroot, didn't want to maintain two
environments, didn't want a lot of extra hassle... not to mention 4 or more GB
of RAM was pretty pricey. These days 4GB of RAM costs $50 or so, and things are
much more mature.
> (I'm looking at switching these machines to Debian in the long run, and
> I'll probably start testing that on one machine after lenny is released
> in a couple weeks, so in the long run I may not need to solve the
> kernel/userspace thing in Gentoo.)
I've heard of folks running hybrid 64/32 with Debian; personally I'm running
Ubuntu and it keeps me happy... I was 100% Red Hat for almost a decade.
> So you're saying that a 32-bit kernel will map devices in into the 4GB
> address space in such a way that they hide physical RAM, regardless of
> whether you're using CONFIG_HIGHMEM4G or CONFIG_HIGHMEM64G, and that a
> 64-bit kernel will map devices differently. Is this right?
Er. You are seeing 3GB or so of RAM because some devices, like video cards,
take memory in the bottom 4GB by default, and some PCI devices can only address
memory in the bottom 4GB. There are various approaches to handling this;
Linux supports bounce buffers, so an I/O device can write to the lower 4GB
and the kernel then copies the data to wherever user space expects it (which
can be above or below 4GB). PAE is definitely required; as Cam mentioned, I'd
try HIGHMEM4G first. If that doesn't work, see if the BIOS mentions anything
(a memory hole or remapping option), and if that doesn't work try the 64G
setting.
Unless of course you just want to upgrade to a 64 bit kernel.
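A quick way to see what you're starting from: check how much RAM the running
kernel actually maps and which highmem options it was built with. A rough
sketch (the config file locations are typical but distro-dependent; Gentoo
kernels often expose the config at /proc/config.gz only if that option was
enabled):

```shell
# Total RAM the kernel sees -- if this says ~3GB on a 4GB box,
# you're hitting the sub-4GB device hole described above.
grep MemTotal /proc/meminfo

# Running a 32-bit or 64-bit kernel?
uname -m

# Look for HIGHMEM4G / HIGHMEM64G / X86_PAE in the kernel config,
# wherever this distro happens to keep it.
if [ -r /proc/config.gz ]; then
    zgrep -E 'HIGHMEM|PAE' /proc/config.gz
elif [ -r "/boot/config-$(uname -r)" ]; then
    grep -E 'HIGHMEM|PAE' "/boot/config-$(uname -r)"
fi
```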
> The definition is probably "Machine Register Size". Pointers, ints, and
Which registers? Int? Floating point? SSE?
> addressable memory should go along for the ride because it's convenient
Addressable memory isn't a particularly good one; after all, the last 5
generations or so of Intel chips could address 64GB (36-bit physical addresses
via PAE), but only had 32-bit pointers (4GB worth).
> to do it that way, but note:
> * an int on amd64 gcc is 32 bits. It's a long that's 64 bits.
Indeed... although that's a language thing more than anything else. GCC also
stores long double in 128 bits on x86-64 (it's really the 80-bit x87 extended
format, padded out), but that doesn't really say anything about the hardware.
> * The DOS memory model (*sticks out tongue*) used 16-bit pointers
> on 16-bit processors, but addressed 1MB of RAM by the use of a
> segment register to choose which "paragraph" of memory pointers
> would refer to.
Right, much like PAE. Instead of a segment register they play tricks with the
page tables. Hopefully 64-bit pointers will prevent any new hacks in that
area. Let's say most systems max out at around 8 DIMMs: 2^64 bytes / 8 = 2^61
bytes per DIMM. So until something bigger than 2048-petabyte DIMMs becomes
common we should be safe. Maybe then ZFS will make sense ;-)
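For the skeptical, the arithmetic checks out in any 64-bit shell:

```shell
# 2^64 bytes of address space split across 8 DIMM slots is
# 2^61 bytes per DIMM; shifting down by 2^50 converts to pebibytes.
bytes_per_dimm=$(( 1 << 61 ))            # 2^64 / 8 = 2^61
pebibytes=$(( bytes_per_dimm >> 50 ))    # / 2^50
echo "$bytes_per_dimm bytes = $pebibytes PiB per DIMM"
```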