[vox-if] Installfest Hardware, etc.

javilk at mall-net.com
Tue Mar 18 19:22:27 PDT 2008

> I'm taking further discussion off the list --- please email me if you 
> want to be included...

    Oops!  I thought it was just the two of us...

> >      Knowing what you are doing is a hindrance.  It is when you don't 
> > know what you are doing, that you find shortcuts...
> > 
> >      But then, just about everything I do is a research project.
> Online forums help you in volume with the trivial operations.  However, 
> they are useless under complex situations.

     As I well know.  I used to be on SVLUG back in the '90s, when they 
were something.

> >       Perhaps if I knew which conflaguration files to ignite...
> I could show you at the installfest


> >      I have the option on the BIOS.
> Great, then it's easy.

     If you know how.

> >     c... c.. Compile???!!!  I mean, simple things I compile.  10k, 50k 
> > source files.  But the whole dinking thing with non ECC, not even Parity 
> > checked RAM???  That's on the edge of insane.  
> Huh? Recompiling the kernel is trivial.  Really, it is.  Don't be scared.

    Part is knowing how, part is trusting your hardware.  I get hardware 
errors on anything I really use.  I use my drives heavily enough to run 
into their rated error rates.  Sometimes the delays trigger tagged queue 
depth cuts, and we have to reboot the machines.  But it's true that some 
of our hardware is a tad old.  Those new drives I bought aren't new 
anymore.  I still remember when one of the drives I was using had a 
buffer error. Took a lot of work to find that. Everyone was saying just 
reinstall the system, just reinstall the system!  Then I did a complex 
sort, and found records overlaid with parts of other records. Drive 
hardware, not system!  My father's most important rule was: prove the 
diagnosis before you try to repair the problem.  Time and again, I see 
people trying to fix the wrong problem.

> >     Several machines share drives for low I/O (usually final 
> > destination) files, served to the web by a low front end machine. Took 
> > two hours to write.  And if I need to repurpose a machine, no problem!
> > 
> >       As for LVM, No!  I rig things so only one process is writing to or 
> > reading from ONE file per drive.  Dramatic speed improvements that way, 
> > as the head movements are minimal.  Hardware level optimization, simply 
> > by the number of drives per machine; though you do have to write the 
> > application yourself...
> Although ACID is generally a good policy, manufacturers and industry 

     Not familiar with the term ACID.

> design things to be used specifically.  Usually, letting these things do 
> their thing is a better policy.  You are assuming contiguous files for 
> instance, and that the file directory listings are reasonably placed on 
> the disk.  You are assuming that the caching mechanism of the drive is 
> more favorable in your model than in some other one.  You are assuming 
> that the cost of latency is minimal to some speed increase.  You are 
> assuming that the kernel stack to the ioctl level isn't better when 
> parallelized.  Additionally, ACID, which is what you are essentially 
> describing, was designed for reliability, more than speed.

      I've run tests for what I do.  And yes, I make sure the files are 
contiguous, etc. for those projects.  For what I do... But I certainly 
don't know much compared to what there is to know!  
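     Though come to think of it, the one atomicity trick I do lean on 
when several processes share a drive is write-then-rename.  A minimal 
sketch (the file names here are invented for illustration):

```shell
# Write to a temp file first; rename(2) is atomic on a POSIX
# filesystem, so readers see either the old file or the new one,
# never a half-written mix.
tmp=$(mktemp records.XXXXXX)    # hypothetical scratch name
printf 'key=value\n' > "$tmp"
mv -f "$tmp" records.txt        # same filesystem, so mv is a rename
cat records.txt
```

Since mv within one filesystem is a rename, the replace either fully 
happens or not at all, no matter when the machine dies.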

> >      Yes!  But I'd rather plunk down my dollars to let someone else give 
> > me a STANDARD, tested, Known Good system. Then, if something goes 
> > strange, and does so ONLY on my machine, I know it must be my software, 
> > my hardware, not some screw up in the operating system.
> The Cathedral and the Bazaar by Eric S. Raymond may be an interesting 
> read.  The highly integrated systems do not necessarily function better than 
> the hacked together ones.  My Dell Poweredge only goes down with the 
> power, and it runs FreeBSD 4.x - something it certainly didn't come with 
> --- on processors it didn't come with, with RAM it didn't come with, on 
> drives it didn't come with.  I've had uptimes on it for about 1 year --- 
> and only because I tend to move around a lot.

    Only a year?  I've seen longer. BUT when it fails, I want to know 
whether it's my software, the OS, or the hardware.  (Usually it's that I 
just three-fingered the wrong keyboard!)  

> To put it another way, if I was to want a sandwich, given the time, I 
> would rather buy the ingredients myself to be assured of their quality 
> than entrust a deli.

    Absolutely!  But that assumes you are not trying to make it in an RV 
someone is driving on a bumpy dirt road.   I'm a practical guy.  I get 
sucked into those RV sandwich projects, because the guys I'm working 
with suddenly notice they are starving.  Then they all look at me...  
Well... give me the hardware, and I'll give you the software.

> >       I've fought that war before on lots of systems... Nothing like a 
> > Known Good Standard! (I've seen enough OS level bugs, etc... standards 
> > make bug replication easier, and more conclusive as to cause if you 
> > can't replicate it on another machine.)
> That model of software development, speccing standards before devel, is 
> on its way out.  It really is.  Just look at extreme or agile 

    Wrong reference. I am referring to the standard Known Good 
distribution operating system; not programming standards.  If my tool 
fails, I want to know if it was the RAM, hard drive, CPU, or a bug in 
the tool. Reload the tool, do a Diff on it... try the same thing on 
another machine... etc.

     As to specifications before development, rapid prototyping helps 
make sure you and the client are talking about the same thing. All too 
often, we aren't. But the prototype also inspires good ideas; and often, 
that's even more important.

> programming.  People in the field of software development life cycles, 
> rhythms and methodologies are now actively recommending different models 
> than the traditional "let's build a bridge" one.

> >      None of them offer the kind of facilities that BASH offers.  I can 
> > whip up a custom app in Bash in a matter of hours to a day that you'd take 
> > weeks in Perl/Icon/Python, etc. and months in C.
> I accept this challenge...

    Of course, it depends on the project.  Till I know what the client 
wants, I don't know what I have to use, or do.  I spend most of my time 
thinking about what and how. On one-week assignments, as were common on 
one long project I was involved in at IBM, I'd think till Wednesday 
night or Thursday morning, turning in the results some time Friday.  
The rest of the guys sometimes had to work till late Sunday (with good 
overtime!)  When I asked my boss why I didn't get those tough projects, 
he just looked at me.  Only after we'd both left that company did he 
tell me he was always giving me the toughest assignments.  My thing on 
normal consulting projects is to spend a few weeks sitting there 
watching the thing come together in my mind before I start writing code. 
Some managers tolerate that, others get all antsy and require a lot of 
intermediate paperwork which slows that even further.

     But I've not had those kinds of projects in a while. We're not 
trying to do enough impossible things in Silicon Valley anymore. 

> >      BASH delivers the most computation (using standard Linux utilities) 
> > per line of code and per hour of human programming; far more than any 
> > other language when used by a skilled programmer.  
> I and Stephen Wolfram would have to disagree.  Let's try for nonlinear 
> optimization and least square minimizing.  Then we can move on to FFT's 
> and things that utilize transcendental and imaginary numbers.  Move a 
> bit over to abstract algebra and I bet that BASH is starting to hurt a 
> little.

     Sure, if you invest the time, you can get far better throughput.  
But if you have to toss up a prototype, if you measure the productivity 
of YOUR time, not the computer's productivity, then BASH is usually the 
first and fastest cut.  At least for what I've often done.  (Or REXX, or 
whatever scripting stuff the client has.)

> > >      What I want, is to be able to say:
> > 
> > a=de
> > b=fg
> > defg=testing
> > 
> > and get echo {{a}{b}} to say "testing".
> This is basic reflection.  Most modern languages, in fact, even 
> Javascript, support this.  I really suggest you do a survey of what 
> modern scripting has to offer.

    Most modern languages???  Interesting!  I have not used Java or 
Javascript.  This is a good reason to look at it!
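     Though bash itself, as it happens, has had a form of this since 
version 2.0: indirect expansion, ${!name}.  A minimal sketch of the 
example above:

```shell
a=de
b=fg
defg=testing
name="$a$b"        # builds the string "defg"
echo "${!name}"    # indirect expansion: the value of the variable named "defg"
```

That prints "testing" -- one level of indirection, not the arbitrary 
nesting of {{a}{b}}, but it covers the common case without eval.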

> >      That's the kind of crazy stuff I do. I've programmed in over 35 
> > different computer languages, been a Software Linguist on a project for 
> > IBM, etc. Don't claim to be good at any language... but I get things 
> > done.  And do it faster and simpler than most anyone else.
> I'm guessing you've worked a bit with JCL, right?

     Yep. Punch cards, too.  (The proper use of punch cards is not 
programming, it's for checklists.) My first personal computer was an IBM 
360, model 65, running OS/MVT with HASP.  The machine was ALL MINE from 
Saturday evening to 6 a.m. Monday morning!  Big 2250 vector display with the light 
pen and "gas pedal", etc.  PL/1 and CPS, etc.  I hate to think how slow 
it actually was, and how little work we could actually get it to perform 
compared to what we do today.  A 2 megabyte digital picture?  How many 
months to resize one of today's digital pictures on an IBM 360???  
Nevermind the electric bill...!!!

     We're in computational heaven today!!!  I have a TERABYTE and a 
half on one machine!!! I remember renting disk packs for the 360. I 
think it was a couple meg for over a hundred dollars a month. Then the 
deliveryman dropped it lightly on a corner, and the next day, it cost 
$13,000 to repair the drive.  I let CDC and IBM decide whose insurance 
was to pay for that fiasco.

     And now that we are in computational heaven, the machines still run 
too slow for what we want to do with them.  But for the moment, I have 
enough drive space.

> >      If you can do that kind of crazy recursive substitution, you can do 
> > fantastic rule based AI. I've done things like that. That is power that 
> > blows all compiled and many interpreted languages away when used by 
> > someone like myself.  Even blows prolog away.  And as I said, it can be 
> > used to generate other code.
> Here is how to do it in awk
> ls -ltr | awk '{for(x=1;x<=NF;x++) print $x;}'

    Yes, but Awk is not BASH.  I am trying to execute BASH to execute 
strings of utilities that do the things that I typically end up doing.  
(And I sometimes question whether I'm sane doing it in BASH...)

     Not to say there are not other problems trying to use BASH for 
some things!!!  
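     What I mean by executing strings of utilities is roughly this 
pattern: build the pipeline as text, then hand it back to the shell to 
run (a toy sketch):

```shell
# Assemble a pipeline as a string, then let the shell execute it.
# A rule-based generator would build $cmd instead of hard-coding it.
cmd="sort | uniq -c | sort -rn"
printf 'b\na\nb\n' | eval "$cmd"   # counts lines: b appears twice, a once
```

eval is what turns the generated string into running code; quoting the 
generated pieces is the part that takes care.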

> x is substituted for $1 $2 $3 $4 etc.  It's basic reflection, in about 
> 30 characters of awk --- you could easily emit this too if you sprinkle 
> a little quote here or there.

    Hmmm... Have to think about it; but then I'm transforming a program 
based on a priori definitions, not transforming a statement based on 
internal conditions of the program.

> >      The first cut of anything I'll do in BASH, if I can.  Next, we try 
> > making portions more efficient.  And finally, I'll use something like C, 
> > or even assembler, for the parts that really need speed.
> Yeah, going back to above --- I think that Intel's Compiler, icc, has 
> probably a better track record, being internally developed by the 
> company that made the processor, at knowing what will optimize the code 
> as opposed to what just looks like optimization.

      Yes.  But no one has tossed me enough cash to make me do that in 
the last few years...  Not that I don't want to do that, just that the 
turn around time and costs do not warrant it at present.

> I've thrown a few optimization flags on icc with basic c code and been 
> on par or faster with my assembly writing counterparts simply because, 
> perhaps, for example, using the EFLAGS register in some sequential 
> manner is faster than using the same over and over - for some obscure 
> reason based on the architecture of the chip.  You and I don't know 
> these details, but Intel does.

     Right.  But a part of the problem is the notation of assembler 
itself. My favorite was SMAL/80, with its a=m(hl) type notation and a 
darned good macro pass.  I forget if it was Kernighan or Ritchie, 
probably Ritchie, who looked at some of my macro based optimization in 
their SMAL/80 and said he didn't know you could do that.  Anyway, he was 
impressed enough to see me in New York several decades ago when I was 
going to the ACM meetings.  I am surprised their notation did not catch 
on with Intel and others.

> >      What else I am looking for, is a good, simple editor that has macro 
> > capabilities, and the ability for the macros to kick off into BASH to 
> > run utilities.  I had a version of CRISP way back that could do it under 
> > Interactive Unix (pre Slowaris) and DOS...  but I lost the source.  (No, 
> > I don't want to learn Emacs.  My fingers do word star.)
> Wikipedia has a nice survey of text editors, many supporting what you 
> talk about.

      The requirement didn't come up till yesterday.  I'll have to do 
some software archaeology next month... so I need something more capable.

> >      Anyway, it sounds to me that if I could set up an ethernet boot, I 
> > might be able to get something installed.  (Gentoo would be 
> > interesting... but again, I don't trust this hardware for compiles.)
> Compilers have come a long way in the past 10 years - really.  EGCS of 
> 1998 cannot in any way be compared to modern GCC.

     Good.  I've been working in the backwaters on some web stuff, 
massaging a lot of data for a dwindling number of clients.

     Trying to convince this client we should be using embedded Linux 
instead of starting from scratch with a PIC. Not that I know anything 
about embedded Linux; but it would give us more flexibility for future 
expansion, which I know we are going to have!

     Tempted to come to the next installfest with the truck, wheel in 
the whole archaic equipment rack of machines and let you guys tell me 
how to do it so much better than the way I cobbled it together.

     So much to learn, so little time...

-JVV- (javilk at Mall-Net.com)
John V. Vilkaitis
