[vox-tech] Must a 300 microsecond delay keep the CPU busy?

Micah J. Cowan micah at cowan.name
Tue Apr 4 12:57:50 PDT 2006


On Tue, Apr 04, 2006 at 11:52:52AM -0700, Chris Jenks wrote:
> 
>    Dear Group,
> 
>    I'm writing a C program on my Debian system to read from an interface 
> board through the parallel port. I need to wait at least 300 microseconds 
> before reading from the next channel, to give the A/D converter on the 
> board time to stabilize, but I don't want to wait much longer (e.g., 10 
> milliseconds) because it will make the program too slow. The delay 
> functions (usleep, nanosleep...) only provide delays down to 10-30 
> milliseconds, despite their name, because they apparently yield the CPU to 
> other tasks with every call. The best solution I've found is to read (or
> write) to a port (e.g., 0x80), which takes one microsecond. By doing this 
> 300 times, I get something close to the wanted delay, plus a little 
> because of time sharing, but it is good enough. The only thing I don't 
> like is that my process takes about 97% of the CPU, even though it spends 
> almost all its time waiting. The CPU is a fanless 386, and it runs pretty 
> hot at 97% usage. Is there an elegant solution to this, or should I look 
> for a CPU fan? I would like to keep this a time-sharing system.
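
[For reference, the port-write trick described above looks roughly like
this. A sketch only, assuming a Linux/x86 box, root privileges for
ioperm(), and the ~1 us-per-write timing of port 0x80 quoted in the
post; the function name delay_300us is made up for illustration:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/io.h>   /* ioperm(), outb() -- glibc, x86 only */

    /* Burn roughly 300 us by writing to port 0x80 300 times; each
     * write is assumed to take about 1 us, per the post above. */
    static void delay_300us(void)
    {
        int i;
        for (i = 0; i < 300; i++)
            outb(0, 0x80);
    }

    int main(void)
    {
        /* Ask the kernel for access to port 0x80; needs root. */
        if (ioperm(0x80, 1, 1) != 0) {
            perror("ioperm");
            return EXIT_FAILURE;
        }
        delay_300us();
        return EXIT_SUCCESS;
    }
]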

Why must you read from a port? ...as long as you're not giving up system
resources anyway, couldn't you just call gettimeofday() repeatedly?
...or does the system clock not support a sufficient resolution?

Not a great solution, but might be better than doing other I/O...
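
[A minimal sketch of that polling idea, assuming gettimeofday() on the
box actually ticks at microsecond granularity; the helper name
spin_wait is made up for illustration:

    #include <sys/time.h>

    /* Spin until roughly `usec` microseconds have elapsed, by
     * polling the system clock instead of touching I/O ports. */
    static void spin_wait(long usec)
    {
        struct timeval start, now;
        long elapsed;

        gettimeofday(&start, NULL);
        do {
            gettimeofday(&now, NULL);
            elapsed = (now.tv_sec - start.tv_sec) * 1000000L
                    + (now.tv_usec - start.tv_usec);
        } while (elapsed < usec);
    }

Note this still busy-waits, so it won't bring the 97% CPU usage down;
it just avoids hammering the I/O bus.]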

Too bad about the excessive delay from usleep() and nanosleep()...
they're allowed to go over, but... 10 milliseconds? That sucks.
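
[One way to see the overshoot on a given kernel is to ask nanosleep()
for 300 us and time what you actually get. A sketch; the ~10 ms floor
most likely reflects the scheduler tick (10 ms at HZ=100 on kernels of
this era), not nanosleep() itself:

    #include <stdio.h>
    #include <time.h>
    #include <sys/time.h>

    int main(void)
    {
        struct timespec req = { 0, 300 * 1000 };  /* request 300 us */
        struct timeval before, after;
        long actual;

        gettimeofday(&before, NULL);
        nanosleep(&req, NULL);
        gettimeofday(&after, NULL);

        actual = (after.tv_sec - before.tv_sec) * 1000000L
               + (after.tv_usec - before.tv_usec);
        printf("asked for 300 us, slept about %ld us\n", actual);
        return 0;
    }
]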

-- 
Micah J. Cowan
Programmer, musician, typesetting enthusiast, gamer...
http://micah.cowan.name/

