I thought I'd detail the build process of a new server for the lab here. While I'll spec the hardware, this thread should be mostly about getting the software set up as desired. It's just nice to know what we're working with here.
I have a Core 2 Quad at 2.83 GHz and 8 GB of DDR2 memory floating around from a mobo (Asus P5QL/EPU) that had gone bad - it had that known issue with the Intel chipset SATA controller failing - two boards' worth. Being a cheapskate (kind of), I wanted to reuse those, and having already used up the box they were originally in for something else, I had to buy a few more parts.
I got an ASUS P5QL-VM DO mobo. This one is a lot simpler and lower power than that bleeding-edge thing that failed - no big copper fluid-cooling pipes for the chipset required here. According to PCLand, it was about the last Core 2 board on earth with 4 memory slots. It has onboard video, but I wanted to be able to run dual monitors, so I added an MSI video card, M8400 series (Nvidia chipset, which seems to be the best supported under Linux).
I bought an Intel SSD to be the root drive, 160 GB (more than required, but what was in stock). For other directories that get a lot of writing, I got a Seagate Barracuda 7200 RPM SATA drive, 2 TB (also more than needed, or so I hope).
That's all the hardware for now; I put it in a cheap generic mid-tower case with a 350 W power supply. The case has a bunch of USB and audio jacks prominent on the front, which I like and which will come in handy around the fusor for data acquisition should I remove the machine right on its rack (because eventually, some stray HV will toast that one).
I of course used another machine to download and burn an ISO of Ubuntu 10.04.2 64-bit. I have slightly mixed feelings about 64-bit Linux at this point, which I'm also running on the *really* fast machine. This is largely due to Adobe Flash and video drivers not being ready for prime time, but there are other slight issues too -- session memory gets lost in the terminals, and other little annoyances as well. Since this install is "fresh" and easy to back off from, I may revert to 32-bit, which works perfectly and, with PAE, allows the use of all that RAM -- but only in chunks of 4 GB or less per process. That would seem good enough for almost anything, but you never know with realtime data aq.
First move was to partition the magnetic disk into 3 partitions, mounted as /tmp, /var, and /home. I used 20 GB for /tmp, 40 GB for /var, and the rest for /home, which also holds a network-shared directory I always call "pub" and make "promiscuous" so I can move things around the network and do cross-machine backups. Doing this, with the SSD mounted as root, means that most of the writing that goes on during normal use gets pushed onto the magnetic drive, not the SSD, so it won't wear out. I may also disable the journaling on the SSD to reduce writes to it and keep the speed high.
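To pin that layout down, here's roughly what the relevant /etc/fstab lines end up looking like -- just a sketch, with assumed device names (/dev/sda as the SSD, /dev/sdb as the magnetic drive; check yours with sudo blkid) and ext4 assumed throughout:

```
# /etc/fstab sketch -- device names are assumptions; verify with "sudo blkid"
# /dev/sda = SSD (root), /dev/sdb = magnetic drive
/dev/sda1  /      ext4  noatime,errors=remount-ro  0  1
/dev/sdb1  /tmp   ext4  noatime                    0  2
/dev/sdb2  /var   ext4  noatime                    0  2
/dev/sdb3  /home  ext4  noatime                    0  2
```

The noatime option cuts another class of needless writes (access-time updates). And if I do kill the journal on the SSD, that's tune2fs with the filesystem unmounted (from a live CD, since it's root): tune2fs -O ^has_journal /dev/sda1, followed by a forced check with e2fsck -f /dev/sda1.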
This is a very fast machine. It's only barely noticeably slower than the fastest one here, a 3.5 GHz i7 with 12 GB of RAM, the same SSD-plus-magnetic-disk setup (more or less), and a super vid card (getting into the big-fan sort, lotsa CUDA cores). I'm hoping that despite latency due to what we used to call the "pentium pause", it will keep up with some serious high-rate data aq. I don't think having only half the virtual cores will be a big deal, but we'll see. Linux does a real decent job of splitting multiple processes across multiple cores, so they load up nicely in most usage situations, particularly if you plan for that when organizing the code package you'll be running and needing great performance out of.
The "pentium pause" seems to be something that happens to highly loaded Intel CPUs -- they auto-throttle without any warning when they get hot, and seemingly just go off on some demented errand of their own for milliseconds at a time. This is why PCs stink for data aq with hard realtime deadlines, and why we build devices to buffer data a little on the way into the PC to overcome that issue.
The install went slick, as usual. Don't even think about installing Ubuntu on a machine that isn't hooked to the internet - it's a waste of time, and it'll just take more time later to get the latest-greatest everything when you finally do get it connected. Once you've got it basically in, run Update Manager (in the Administration menu) until you've got all the possible updates - they're all good, but sometimes you have to do this more than once, since some things have to go in a particular order. Then check again after installing any other software. Once set up, this is automatic from then on and works pretty sweetly compared to "those other opsys" that go off and eat bandwidth when *they* please, not caring if they're sucking bandwidth (network and CPU both) that you need for the actual job at hand, and then refusing to let you shut down until they're done "phoning home".
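If you'd rather not click through Update Manager repeatedly, the same thing from a terminal is the standard apt pair -- run it until no more updates are offered:

```shell
# Refresh the package lists, then pull in everything available.
sudo apt-get update
sudo apt-get upgrade
# dist-upgrade also takes updates that add or remove dependencies:
sudo apt-get dist-upgrade
```

Same ordering caveat applies: a second pass sometimes picks up packages that were held back the first time.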
First thing to do -- get those darned window buttons back on the right side. I don't have iOS envy like the later output of the Ubuntu team seems to have; I just want an opsys that lies there quietly, no eye candy, no fancy wiggly windows -- just load my programs. I did the procedure linked here to get that done.
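For the record, the heart of that procedure on 10.04 (GNOME 2 with the Metacity window manager) boils down to one gconf setting -- a sketch, assuming the stock desktop rather than some custom theme:

```shell
# Put the window buttons back on the right, in the classic order.
gconftool-2 --set /apps/metacity/general/button_layout \
  --type string "menu:minimize,maximize,close"
```

It takes effect immediately; no logout needed.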
Of course I'll be wanting a bunch of other software on this machine: MySQL, VirtualBox, a buncha Perl modules to support my own coding, and so on. Procedures for getting them all in and set up "right" will follow on this thread, along with any tips I (re)discover on the way. File sharing has become all too "interesting" on Ubuntu. It used to be you'd just install Samba, use all the defaults, maybe add a share in the .conf file, and you were done. It's not as simple now, and it's a long story how to get things working as well as they used to. They've added some "simplifications" to the defaults that assume you've got some always-on machine to be a domain server, which isn't the case here - I do a lot of peer-to-peer stuff, with no always-on machine wasting power just to make it possible for two other machines to share. Thus, I consider their new simplified "improvement" a bug. Further, the new smb.conf just ain't right even if you do put in the full version; we had to tweak an older one to get things really right, and copy it around.
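For reference, the kind of share stanza I mean in /etc/samba/smb.conf -- a minimal sketch; the share name "pub" and path match my setup above, and the wide-open permissions are deliberate, sane only on a trusted lab LAN:

```
; /etc/samba/smb.conf fragment -- simple peer-to-peer share, no domain server
; WARNING: guest-writable; only for a trusted private network
[pub]
   path = /home/pub
   browseable = yes
   read only = no
   guest ok = yes
   create mask = 0666
   directory mask = 0777
```

Restart Samba after editing (on 10.04, something like sudo /etc/init.d/samba restart) and check your syntax with testparm before you do.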
I should make a list to save me time next time. Oh, I AM making a list!