At 11/6/03 17:18, you wrote:
Dear Pascal Dornier:
I'm a Ph.D. student in Computer Science at Yale University. One of our ongoing projects is to use formal systems to verify low-level code. We decided to work on BIOS code because the BIOS is the lowest-level software on a computer: it is largely self-contained, and it provides services for higher-level software. It would be interesting to have a verified BIOS that others could build on. As a first step, I'm reading the code of your tinybios. My question is: as an experienced BIOS developer, what kinds of properties of a BIOS program are you interested in? In other words, what kinds of bugs do you think appear most frequently in BIOS programs? If I had an ideal tool that could verify any property you wanted (e.g. memory safety, type safety, deadlock freedom, etc.), what properties would you want to verify?
Your response would be highly appreciated!
I'm afraid a formal tool probably wouldn't do much for me. Typical BIOS problems
today are at a different level:
- timing problems of peripherals (e.g. CD-ROM, PS/2 mouse controller; see the polling sketch below)
- temperamental chipset settings (did I mention that the documentation usually sucks?)
etc.
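To make the timing point concrete, here is a minimal C sketch of the classic defensive pattern: poll the PS/2 controller status port with a bounded wait, since on some parts the busy bit simply never clears. This is an illustration, not tinybios code; inb() and the timeout constant are assumptions.

    #include <stdint.h>

    /* Hypothetical sketch: bound every wait on hardware, because on
       flaky parts the status bit can simply never clear, and an
       unbounded loop hangs the BIOS. */

    extern uint8_t inb(uint16_t port);   /* platform port read, assumed */

    #define PS2_STATUS 0x64
    #define PS2_IBF    0x02              /* input buffer full */

    int ps2_wait_input_clear(void)
    {
        unsigned timeout = 100000;       /* value tuned by trial and error */
        while ((inb(PS2_STATUS) & PS2_IBF) && --timeout)
            ;                            /* spin until clear, or give up */
        return timeout != 0;             /* 0 means the controller never responded */
    }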
Memory safety -> BIOS does not do much memory access.
Type safety -> all assembly code, no types!
But you are certainly welcome to use my code as a test case.
Best regards,
Pascal Dornier
1. Runtime Checks
a. Checked build of the kernel. Microsoft ships a heavily asserted build of the system for driver writers to use while testing (a sketch of what "heavily asserted" means follows below).
b. Driver Verifier. This is built into every system and allows runtime checking and low-resource simulation.
c. Call Usage Verifier. This is relatively new; it is a compile-time option used while building drivers to add runtime checks that validate kernel calls.
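For readers who have never used the checked build, here is a rough sketch of the kind of assertion it compiles in. The macro is my own illustration (the DDK's actual ASSERT differs in detail), though DbgPrint and DbgBreakPoint are real kernel APIs:

    #if DBG
    /* Checked (debug) build: report and break on a violated assumption. */
    #define MY_ASSERT(exp) \
        ((exp) ? (void)0 \
               : (DbgPrint("Assertion failed: %s, %s:%d\n", \
                           #exp, __FILE__, __LINE__), DbgBreakPoint()))
    #else
    /* Free (retail) build: the check compiles away entirely. */
    #define MY_ASSERT(exp) ((void)0)
    #endif

    /* Typical use: document and enforce an IRQL assumption, e.g.
       MY_ASSERT(KeGetCurrentIrql() <= APC_LEVEL);                  */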
2. Static Analysis
a. Warning level 4. Microsoft has fixed things in recent years so the compiler can build drivers with full checking. This will find a number of basic errors.
b. PREfast. With the latest DDK, Microsoft has added a static analysis tool for drivers. This tool came from Microsoft Research and has had academic papers published about it.
c. ChkInf. This tool checks the installation files for validity.
d. PC-lint. This has been mentioned in other posts. It is a powerful tool that allows extension for specific runtimes. (A sample of the kind of code these tools catch follows below.)
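For anyone who has not run these tools, here is a small user-mode fragment, deliberately buggy and purely illustrative, of the kind of code that compiles silently at low warning levels but lights up under /W4 or a PREfast-style analyzer:

    #include <stdlib.h>
    #include <string.h>

    /* Each comment notes what the tools flag. */
    int copy_packet(const char *src, size_t len)
    {
        int status;                 /* W4: 'status' may be used uninitialized */
        char *buf = malloc(len);
        memcpy(buf, src, len);      /* static analysis: 'buf' may be NULL here */
        if (len = 0)                /* W4: assignment within conditional; == intended */
            return status;          /* returns the uninitialized 'status' */
        free(buf);
        return 0;
    }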
3. Code Coverage and Testing
a. Both Bullseye and NuMega have code coverage tools to see what code is executed.
b. The DDK has a number of tools to exercise drivers.
c. WHQL has a large suite of tools to test drivers; the only complaint is that the sources are not present and the tools have lousy interfaces for reproducing problems.
Actually, compared to most ***x OS's there is a strong tool base for driver writers. The problem is more to get developers to use them. Few people use the checked build, and that includes people inside Microsoft. Several groups inside Microsoft ship drivers that do not pass the verifier (even though WHQL requires this).
Almost no one uses the static analysis tools. I have pushed people towards them and heard the complaint that the warnings are useless noise to fix. My only successes have come in cases where the tool found a bug that was driving the developer crazy. I also know of a major firm I reported a bug to based on compiling with W4; they thanked me for the fix, since it had been crashing customers' machines for two years, but explained that they could not spend the time to fix the W4 complaints in the rest of their products because they were too busy chasing bugs!
I am not convinced this is a good research project since there is a lot of work already in the area.
Don Burn (MVP, Windows DDK)
Windows 2k/XP/2k3 Filesystem and Driver Consulting
This really goes a long way. This is the point that any respected university, not just Yale, should think about. If someone is after postulating a theory, lemma, or corollary, then they should not beat around this bush. On the other hand, if someone really wants to get into reality, then this point of Gary's should be preached every morning after getting up (well, of course, if there was time to sleep last night). And here reality means the unexpected, and in that world the unexpected is expected, since only the unexpected follows the theory of expectation, with a bias toward unexpectedness.
It has nothing to do with Yale or the OP. But the sad fact is that a lot of theses are just not that valuable from an immediate practical standpoint. For this reason, big think tanks have both theoretical and applied research, plus a battery of worker ants (worker ants are the females that bring the food home...). And I've seen both worlds; this is my practical experience. My advisor asked why I was leaving when three quarters of my thesis was already done and I only needed to implement a couple more graph algorithms for probabilistic parallel programming on (semi)regular graphs, of the kind often used by network/telephone companies... I knew in my heart that it would be like venturing on goodwill, if anything at all.
A programmer is a programmer is a programmer, ad infinitum. It does not require a high-profile degree, but basic computer science principles are invaluable. The principles are tools, just as classical and discrete math are tools for the physicist...
Well said, Gary. This is the point one should think about first... "what is it worth?", "how is it going to make a difference?", etc., etc.
The best scientists around the world are extremely practical, I suppose!
Three comments:
1) In the real world, attempts to solve the problem of unreliable drivers have tended to focus on 3rd party verification techniques. The reasons for this are manifold, but basically all come down to the stability of the system being only as good as its weakest link. The kind of people who would even think of running a formal correctness checker on their code are not, by and large, the ones generating the problem drivers.
As a result, static code analysis isn't likely to improve the situation in the real world by very much. It might still be a nice tool for those of us that *are* careful.
2) The kinds of issues that most commonly cause bugs (other than carelessness) are due to the complexity of the environment that the driver lives in. The largest component of this on any modern system is the operating system itself. And most of the really tough issues aren't "bugs" so much as they are misunderstandings about the definition of that interface, or changes in the definition over time.
These issues defy static analysis to a large degree because the static analysis would have to understand the interface (and adapt to the rapidly changing interface) better than even some of the most experienced driver writers.
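A small user-mode illustration of that class of bug, for concreteness (hypothetical code, not from any driver): the realloc() contract is exactly the kind of interface definition that experienced programmers routinely misremember, and the misuse compiles cleanly and usually works.

    #include <stdlib.h>

    /* Contract misunderstanding in miniature: on failure, realloc returns
       NULL and leaves the old block allocated. This common idiom therefore
       leaks the old block and loses the only pointer to it. */
    int grow_buffer(char **buf, size_t new_size)
    {
        *buf = realloc(*buf, new_size);   /* bug: old *buf leaks on failure */
        return *buf != NULL;
    }

    /* Correct form: keep the old pointer until the call succeeds. */
    int grow_buffer_safely(char **buf, size_t new_size)
    {
        char *tmp = realloc(*buf, new_size);
        if (tmp == NULL)
            return 0;                     /* old *buf is still valid */
        *buf = tmp;
        return 1;
    }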
As a result, I would say that the largest factor that can lead to improved driver quality is simplification, rationalization, and stabilization of driver interfaces on the part of OS vendors.
Originally, I was considering saying that open-source and/or improved documentation was the best way to solve this, but upon further reflection... Again: the problem is largely the inexperienced and/or careless driver writers. They aren't going to (a) be able to, or (b) bother to, read and fully understand the documentation/source, no matter how good/available it is.
As a result, I feel that research time might be better spent understanding what makes an OS interface rational, simple, and stable.
3) There are many conflicting opinions about how to properly deal with environmental unpredictability. Even something as simple as checking for/handling NULL pointers returned from memory allocations has significant controversy about whether it's a "good idea" or what the best way to handle it is. A tool that can't deal with these differences of opinion is not likely to be very useful in the real world.
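To make that controversy concrete, here is a minimal user-mode sketch of two of the competing styles (function names are made up for illustration):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Style A: check the allocation and propagate the failure upward,
       on the theory that robust code must survive low-memory conditions. */
    char *dup_string_checked(const char *s)
    {
        char *copy = malloc(strlen(s) + 1);
        if (copy == NULL)
            return NULL;        /* every caller must now handle NULL too */
        strcpy(copy, s);
        return copy;
    }

    /* Style B: treat allocation failure as fatal and stop at once,
       on the theory that limping along with no memory only hides the bug. */
    char *dup_string_or_die(const char *s)
    {
        char *copy = malloc(strlen(s) + 1);
        if (copy == NULL) {
            fprintf(stderr, "out of memory\n");
            abort();
        }
        strcpy(copy, s);
        return copy;
    }

A tool would have to know which style a given code base considers correct before it could usefully flag either one.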
Hi Xinyu,
There are already some tools out there that do the kinds of verification you mentioned, so in my opinion it would be wise to take a look at those tools first to come up with a list of the types of errors you would like to cover... As an example, though it might not be exhaustive:
There is a driver verifier that comes with the DDK; it does a lot of checking/verifying.
There are kernel debugger extensions that do deadlock analysis, and those come with windbg, but they do not do prevention analysis, if I'm correct!
There are tools from Compuware (NuMega) that do code coverage, memory leak analysis...
There is no automatic garbage collector, so there is no analysis there as of now, but in the future it seems like a logical direction.
So in essence, my recommendation is to study those tools first to get an idea of what is already provided. Finally, there is a lot of stuff being cooked up at MS that is not quite ready for prime time, so it might be a good idea to get connected with those people, so that you don't come up with something that has already been well researched and beaten to death by MS research/dev people...
-prokash
Hi,
As a driver developer, I am very interested in your project. Meanwhile, I would like to provide some information to you. I hope the following is helpful.
1. Memory leaks
2. Wrong IRQL
3. Deadlock (a classic example is sketched after this list)
4. Synchronization
5. Others
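For item 3, here is a minimal user-mode sketch of the classic lock-ordering deadlock (illustrative only; CRITICAL_SECTION stands in for the kernel locks a real driver would use):

    #include <windows.h>

    /* Thread A takes LockA then LockB; thread B takes them in the opposite
       order. If each grabs its first lock, both wait forever. */

    CRITICAL_SECTION LockA, LockB;

    DWORD WINAPI ThreadA(LPVOID arg)
    {
        (void)arg;
        EnterCriticalSection(&LockA);
        Sleep(10);                      /* widen the race window */
        EnterCriticalSection(&LockB);   /* blocks if ThreadB holds LockB */
        LeaveCriticalSection(&LockB);
        LeaveCriticalSection(&LockA);
        return 0;
    }

    DWORD WINAPI ThreadB(LPVOID arg)
    {
        (void)arg;
        EnterCriticalSection(&LockB);   /* opposite order: the bug */
        Sleep(10);
        EnterCriticalSection(&LockA);
        LeaveCriticalSection(&LockA);
        LeaveCriticalSection(&LockB);
        return 0;
    }

    int main(void)
    {
        HANDLE t[2];
        InitializeCriticalSection(&LockA);
        InitializeCriticalSection(&LockB);
        t[0] = CreateThread(NULL, 0, ThreadA, NULL, 0, NULL);
        t[1] = CreateThread(NULL, 0, ThreadB, NULL, 0, NULL);
        WaitForMultipleObjects(2, t, TRUE, INFINITE);  /* usually never returns */
        return 0;
    }

The fix is to define a single lock acquisition order and follow it everywhere.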
If it is convenient, could you send your survey results to me?
Keep in touch,
Li Zhihua
Hi, Feng Xinyu,
Besides being a kernel developer at Compuware/NuMega, I'm also a professor of computer science. So, I'm interested in your research! If you want, you
can check out my web page at http://www.rivier.edu/faculty/amoreira/web.
I find that bugs related to shared multiprocessor systems are the hardest ones to pinpoint. There's a pretty decent tool out there called "TotalView" that takes a shot at exposing some of the inner workings of an SMP system and at presenting a coherent vision of the system's multiprocessing operation. If you look it up with Google you will find their web site without a problem.
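A tiny user-mode example of why these bugs are so hard to pinpoint (hypothetical, for illustration): the racy increment below usually survives casual testing and only loses updates under real concurrency.

    #include <windows.h>
    #include <stdio.h>

    static volatile LONG counter = 0;

    DWORD WINAPI Worker(LPVOID arg)
    {
        int i;
        (void)arg;
        for (i = 0; i < 1000000; i++)
            counter++;    /* read-modify-write: two CPUs can interleave and
                             lose updates; InterlockedIncrement(&counter)
                             would be the correct form */
        return 0;
    }

    int main(void)
    {
        HANDLE t[2];
        t[0] = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
        t[1] = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
        WaitForMultipleObjects(2, t, TRUE, INFINITE);
        printf("expected 2000000, got %ld\n", counter);  /* usually less on SMP */
        return 0;
    }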
The other problem, of course, is that hardware is non-deterministic, and it can, and does, fail. The issue isn't so much diagnosing the frequent problems; it's really the odd infrequent ones that eat up the most resources. For example, I once had a graphics chip that misbehaved whenever there were two back-to-back Interrupt Acknowledge cycles on the I/O bus, and that was a pig to find. So, maybe you want to consider techniques that work not only on C or C++ but also on hardware definition languages such as Verilog or VHDL? Mind you, debugging Verilog code is way harder than debugging C or C++, and the parallel nature of hardware definition languages makes it even harder.
Memory safety is also important, but there are some pretty decent tools out there for it. If, however, you can find a new way of doing it, for example something that can be used without writing to that memory, I bet a lot of people will raise their eyebrows. In fact, one of the issues we have with debugging today is that most debugging techniques are destructive: they require the debugging software to plant "hooks" inside the code, so that the debugger can seize control at key points and do things such as breakpoints, single stepping, or even debug output. However, this is a jolly pain: planting those hooks is a delicate and very error-prone operation, and we keep being surprised by multiprocessor synchronization problems.
However, this is a bit far from formal verification methods, unless of course you want to be able to verify the health of a debugger's hooking
mechanism.
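To sketch what "planting hooks" involves in miniature (a user-mode toy, not how a real debugger patches code): calls are redirected through a dispatch pointer, and even this trivial version has to publish the hook atomically, or other CPUs can race through the update.

    #include <windows.h>
    #include <stdio.h>

    typedef int (*op_fn)(int);

    static int real_op(int x) { return x * 2; }

    static op_fn volatile dispatch = real_op;
    static op_fn original;

    /* The "hook": observe the call, then forward to the original. */
    static int traced_op(int x)
    {
        int r;
        printf("enter: %d\n", x);
        r = original(x);
        printf("exit:  %d\n", r);
        return r;
    }

    int main(void)
    {
        original = real_op;
        /* Publish the hook with an atomic, fully ordered pointer swap;
           a plain assignment has no ordering guarantees across CPUs. */
        InterlockedExchangePointer((PVOID volatile *)&dispatch, (PVOID)traced_op);
        printf("result: %d\n", dispatch(21));
        return 0;
    }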
Another issue we bump into over and over again is an API call failing. For example, you try to allocate memory and there isn't enough, and the driver is ill-prepared to handle that failure. A formal method of evaluating driver correctness under stress conditions would be an invaluable tool: you run the tool on the driver, and it pinpoints potential failures due to API calls failing. For example, sometimes we have to run a stress test on a piece of code, and it takes hours: so, we leave it running overnight, come back the next morning, and find that the code failed at about 4:30am and somehow we failed to catch much of the failure's original state. If a verification method can expose these problems, it'll be an instant success!
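One low-tech complement to formal methods here is fault injection, which makes the rare failure frequent so the error paths run in seconds instead of at 4:30am. A hypothetical user-mode sketch (Driver Verifier's low-resource simulation is the kernel-mode version of the same idea; the names and rate below are made up):

    #include <stdlib.h>

    /* Hypothetical fault-injection wrapper: every Nth allocation fails on
       purpose, so the caller's failure handling is exercised deterministically. */

    static unsigned fail_one_in = 10;   /* failure rate: made-up tunable */
    static unsigned call_count = 0;

    void *alloc_with_faults(size_t size)
    {
        if (++call_count % fail_one_in == 0)
            return NULL;                /* simulated out-of-memory */
        return malloc(size);
    }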
Along those lines, anything that contributes to "unintermittentlizing" an intermittent problem will also be a great contribution. Many of the problems we get hammered on by our customers tend to be of the intermittent sort, which makes them very hard to duplicate. Right now, we must duplicate every error we find before we can fix it; but if a formal verification tool could expose risky code and forecast those failures for us, again, that would be a great contribution to the state of the art.
I'll be here; you can email me privately at amoreira@ieee.org if you want to talk further about this issue. Feel free to contact me!
Alberto.
...
As for the formal verification tool, it already exists. It is named PC-lint and analyses source code. It can't find logical errors such as invalid lock hierarchies, but it finds most coding bugs and forces the developer to handle every possible error condition. Sometimes it is very hard to understand what it is complaining about (because the code seems innocent), but once deciphered, it is mostly right and there is a bug to fix. The DDK contains the PREfast tool, which should work in a similar way; I haven't tried it yet. It would be nice if lint-clean source code were a condition for a successful WHQL test ;-)
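As a concrete taste of "forces the developer to handle every possible error condition": code like the following compiles without a murmur, yet lint flags each discarded return value (the file names are made up for illustration):

    #include <stdio.h>

    /* Innocent-looking code that lint complains about: every return value
       below is silently discarded, and each discard can hide an error. */
    int main(void)
    {
        FILE *f = fopen("settings.ini", "r");
        if (f != NULL)
            fclose(f);       /* lint: ignoring return value of fclose() */
        remove("stale.tmp"); /* lint: ignoring return value of remove() */
        return 0;
    }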
Best regards,
Michal Vodicka
STMicroelectronics Design and Application s.r.o.
PREfast:
http://www.microsoft.com/whdc/hwdev/driver/PREfast.mspx
Driver Verifier:
Driver Verifier always monitors for the following forbidden actions:
http://support.microsoft.com/default.aspx?scid=kb;en-us;244617