Windows NT vs. Unix: A design comparison
The article compares Windows NT and Unix, highlighting NT's modern design principles, hybrid kernel, Hardware Abstraction Layer, object-oriented approach, and early integration of threads, contrasting with Unix's incremental evolution.
The article by Julio Merino compares the design philosophies and implementations of Windows NT and Unix, highlighting the differences that stem from their respective histories and development goals. Windows NT, conceived in 1989 and released in 1993, was designed with modern principles such as portability, multiprocessing support, and compatibility with various legacy systems. In contrast, Unix, which began development in 1969, focused on simplicity and was retrofitted with features like portability and multitasking over time. NT employs a hybrid kernel architecture, integrating elements of both monolithic and microkernel designs, while Unix typically uses a monolithic kernel. A significant aspect of NT's design is its Hardware Abstraction Layer (HAL), which allows it to run on multiple architectures, unlike many Unix systems that are tied to specific hardware. Additionally, NT's object-oriented approach provides centralized access control and unified event handling, which contrasts with Unix's more fragmented handling of processes and resources. The article also discusses how NT's design incorporates threads from the outset, while Unix systems had to adapt to this concept later. Overall, NT's design reflects a more modern approach, leveraging lessons learned from earlier operating systems, while Unix's evolution has been more incremental.
- Windows NT was designed with modern principles, including portability and multiprocessing support.
- NT features a hybrid kernel architecture, while Unix typically uses a monolithic kernel.
- The Hardware Abstraction Layer (HAL) in NT allows it to run on various architectures.
- NT employs an object-oriented approach for centralized access control and event handling.
- Threads were integrated into NT from the beginning, unlike in many Unix systems.
Related
Windows NT for Power Macintosh
The GitHub repository provides information on Windows NT development for Power Macintosh, including ARC firmware, drivers, software compatibility, installation guides, known issues, dual-boot specifics, firmware compilation, and credits.
I Like NetBSD, or Why Portability Matters
NetBSD, founded in 1993, prioritizes portability and simplicity following Unix philosophy. It supports various hardware platforms, emphasizes code quality, and fosters a community valuing system longevity and older tech. NetBSD promotes sustainability and efficiency in software design, offering a cozy, minimal setup for exploration and learning.
Approach used to convert the Hotmail web server farm from Unix to Windows (2002)
Microsoft's conversion of Hotmail from UNIX to Windows 2000 aims to improve performance and hardware utilization. Initial results show better throughput, though Windows' administrative model poses challenges.
Technology history: Where Unix came from
Unix, developed in 1969 by Ken Thompson, evolved from an experiment at Bell Labs. Its design emphasized simplicity, influencing modern Linux systems and establishing foundational commands still in use today.
I Like NetBSD, or Why Portability Matters
NetBSD, released in 1993, emphasizes portability and modular code, supporting various hardware architectures. It offers a user-friendly experience, aligning with sustainability and customization values, despite lower performance compared to other BSDs.
- Many commenters highlight the historical influence of VMS on Windows NT, emphasizing the design principles derived from it.
- There is a debate regarding the effectiveness and efficiency of the NT kernel compared to Unix, with some arguing that NT's architecture is overly complex.
- Several users express concerns about the "bloat" in Windows, which they feel undermines the advantages of its design.
- Comments reflect a divide in user experience, with some praising Windows for its user-friendly features while others advocate for Unix's simplicity and reliability.
- Security and stability issues in Windows are frequently mentioned, contrasting with Unix's perceived robustness in server environments.
It seems like NT was designed to fix a lot of the problems with drivers in Windows 3.x/95/98. Drivers in those OSes came from third-party vendors and couldn't be trusted not to crash the system. So ample facilities were created to help the user out, such as "Safe Mode", fallback drivers, and a graphics driver interface that disables itself if it crashes too many times (yes, really).
Compare that to any Unix: historic AT&T Unix, Solaris, Linux, BSD 4.x, Net/Free/OpenBSD, any research Unix taught at universities, or any of the new crop of Unix-likes such as Redox. Universally, the philosophy there is that drivers are high-reliability components vetted, and likely written, by the kernel devs.
(Windows also has a stable driver API and I have yet to see a Unix with that, but that's another tangent)
The processes section should be expanded upon. The NT kernel doesn't execute processes, it executes _threads_. Threads can be created in a few milliseconds whereas, as noted, processes are heavyweight; essentially the opposite of Unices. This is a big distinction.
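For illustration, a minimal Win32 sketch (not from the comment) of how cheap thread creation is on NT: CreateThread starts a new thread inside the current process, with no new address space to set up, while the classic Unix primitive, fork(), clones an entire process.

    /* Minimal sketch: on NT the schedulable unit is the thread.      */
    /* Creating one is a single call; no new address space is built.  */
    #include <windows.h>
    #include <stdio.h>

    static DWORD WINAPI worker(LPVOID arg)
    {
        printf("hello from thread, arg=%d\n", *(int *)arg);
        return 0;
    }

    int main(void)
    {
        int value = 42;
        DWORD tid;
        /* CreateThread starts a new thread in the *current* process. */
        HANDLE h = CreateThread(NULL, 0, worker, &value, 0, &tid);
        if (h == NULL)
            return 1;
        WaitForSingleObject(h, INFINITE);   /* join */
        CloseHandle(h);
        return 0;
    }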
io_uring would be the first true non-blocking async I/O implementation on Unices.
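For readers unfamiliar with it, a minimal liburing sketch (assuming liburing is installed and linked with -luring; the file path is arbitrary) that submits one read and reaps its completion:

    /* Minimal liburing sketch: queue one read, then wait for its completion. */
    #include <fcntl.h>
    #include <liburing.h>
    #include <stdio.h>

    int main(void)
    {
        struct io_uring ring;
        char buf[4096];

        if (io_uring_queue_init(8, &ring, 0) < 0)
            return 1;

        int fd = open("/etc/hostname", O_RDONLY);
        if (fd < 0)
            return 1;

        /* Queue an asynchronous read... */
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
        io_uring_submit(&ring);

        /* ...and reap its completion; the kernel did the I/O asynchronously. */
        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);
        printf("read %d bytes\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);

        io_uring_queue_exit(&ring);
        return 0;
    }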
It should also be noted that while NT as a product is much newer than the Unices, its history is rooted in VMS fundamentals thanks to its core team of ex-Digital devs led by David Cutler. This pulls back that 'feature history' by a decade or more. Still not as old as UNIX, but "old enough", one could argue.
On the philosophical side, one thing to consider is that NT is in effect a third system and therefore avoided some of the proverbial second system syndrome.. Cutler had been instrumental in building at least two prior operating systems (including the anti-UNIX.. VMS) and Microsoft was keen to divorce itself from OS/2.
With the benefit of hindsight and to clear some misconceptions, OS/2 was actually a nice system but was somewhat doomed both technically and organizationally. Technically, it solved the wrong problem.. it occupied a basically unwanted niche above DOS and below multiuser systems like UNIX and NT.. the same niche that BeOS and classic Mac OS occupied. Organizationally/politically, for a short period it /was/ a "better DOS than DOS and better Windows than Windows" with VM86 and Win API support, but as soon as Microsoft reclaimed its clown car of APIs and apps, OS/2 would forever be playing second fiddle, and IBM management never acknowledged this reality. And that compatibility problem was still a hard one for Microsoft to deal with; remember that NT was not ubiquitous until Windows XP despite being a massive improvement.
For example, the way that command-line arguments and globbing work is night and day, and in my mind the Windows approach is far superior. The fact that the shell is expected to do the globbing means that you can really only have one parameter that can expand, whereas Win32 offers a FindFirstFile/FindNextFile interface that lets command-line parameters be expanded at runtime. A missing parameter on Unix can cause crazy behavior ("cp *"), but on Windows this can just be an error.
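A minimal sketch of the Win32 pattern described above, where the program rather than the shell expands the wildcard at runtime (the "*.txt" pattern is just an example):

    /* Sketch: the *program* expands the wildcard at runtime instead of
       relying on the shell to have done it before argv was built. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        WIN32_FIND_DATAA fd;
        HANDLE h = FindFirstFileA("*.txt", &fd);
        if (h == INVALID_HANDLE_VALUE) {
            /* No match is just an error code, not silently-wrong arguments. */
            fprintf(stderr, "no match (error %lu)\n", GetLastError());
            return 1;
        }
        do {
            printf("%s\n", fd.cFileName);
        } while (FindNextFileA(h, &fd));
        FindClose(h);
        return 0;
    }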
On the other hand, the Win32 insistence on wchar_t is a disaster. UTF-16 is ... just awful. The Win32 approach only works if you assume 64k Unicode characters; beyond that things go to shit very quickly.
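A small illustration of that limit: a character outside the Basic Multilingual Plane occupies two UTF-16 code units, so any code that equates one wchar_t with one character breaks. The example uses C11's char16_t so it behaves the same on every platform:

    /* One Unicode character, two UTF-16 code units: U+1F600 becomes a
       surrogate pair, so "one code unit == one character" stops being true. */
    #include <stdio.h>
    #include <uchar.h>

    int main(void)
    {
        char16_t emoji[] = u"\U0001F600";   /* UTF-16 string literal */
        /* Prints 3: high surrogate, low surrogate, terminating NUL. */
        printf("%zu code units\n", sizeof(emoji) / sizeof(emoji[0]));
        printf("0x%04X 0x%04X\n", (unsigned)emoji[0], (unsigned)emoji[1]);
        return 0;
    }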
This allows user-space processes to easily manipulate GPU resources, share them between processes if they want, and powers higher level technologies like Direct2D and Media Foundation.
Linux doesn’t have a good equivalent for these. Technically Linux has the dma-buf subsystem, which allows sharing buffers between processes. Unfortunately, that thing is much harder to use than D3D, and very specific to the particular drivers that export these buffers.
Cutler knew about microkernels from his work at Digital, OS/2 was a hybrid kernel, and NT was really a rewrite after that failed partnership.
The directory support was targeting NetWare etc...
I feel like this is a point for Unix. Unix being late to the Unicode party means UTF-8 was adopted, where Windows was saddled with UTF-16.
---
The NT kernel does seem to have some elegance. It's too bad it is not open source; Windows with a different userspace and desktop environment would be interesting.
I wish it had branched into one platform as a workstation OS (NTNext), and another that gets dumber and worse in order to make it work well for gaming (Windows 7/8/10/11, etc.).
Technically one would hope that NTNext would be Windows Server, but sadly no.
I remember installing Windows NT on my PC, in awe of how much better it was than DOS/Windows 3.x and later 95.
And compatibility back then was not great. There was a lot that didn't work, and I was more than fine with that.
It could run Win32, OS/2, and POSIX, and it could be extended to run other systems in the same way.
POSIX was added as a necessity to bid for huge software contracts from the US government; MS then lost a huge contract and lost interest in the POSIX subsystem, and in the OS/2 subsystem.
They did away with it, until they re-invented a worse system for WSL.
Note that "subsystem" in Windows NT means something very different from the "subsystem" in Windows Subsystem for Linux.
Welp, that's an unfortunate use of that capability given what we see today in language development when it comes to secondary control flows.
Funny: opening the man page for aio on FreeBSD, you get this in the second paragraph:
> Asynchronous I/O operations on some file descriptor types may block an AIO daemon indefinitely resulting in process and/or system hangs. Operations on these file descriptor types are considered “unsafe” and disabled by default. They can be enabled by setting the vfs.aio.enable_unsafe sysctl node to a non-zero value.
So nothing is safe.
Up until this feature was added, processes in NT were quite heavyweight: new processes would get a bunch of the NT runtime libraries mapped in their address space at startup time. In a picoprocess, the process has minimal ties to the Windows architecture, and this is used to implement Linux-compatible processes in WSL 1.
They sound like an extremely useful construct.
Also, WSL 2 always felt like a "cheat" to me... is anyone else disappointed they went full-on VM and abandoned the original approach? Did small file performance ever get adequately addressed?
Hahaha. Try polling stdin, a pipe, an IPv4 socket, and an IPv6 socket at the same time.
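For contrast, on a Unix system all four are plain file descriptors, so a single poll(2) call covers them. A minimal sketch, assuming the pipe end and the two sockets were created elsewhere:

    /* Sketch: on Unix, stdin, a pipe end, and IPv4/IPv6 sockets are all just
       file descriptors, so one poll(2) call watches them uniformly.
       Assumes pipe_fd, sock4 and sock6 were created by the caller. */
    #include <poll.h>
    #include <stdio.h>
    #include <unistd.h>

    int wait_for_any(int pipe_fd, int sock4, int sock6)
    {
        struct pollfd fds[4] = {
            { .fd = STDIN_FILENO, .events = POLLIN },
            { .fd = pipe_fd,      .events = POLLIN },
            { .fd = sock4,        .events = POLLIN },
            { .fd = sock6,        .events = POLLIN },
        };
        int n = poll(fds, 4, -1);   /* block until any of them is readable */
        if (n < 0)
            return -1;
        for (int i = 0; i < 4; i++)
            if (fds[i].revents & POLLIN)
                printf("fd %d is readable\n", fds[i].fd);
        return n;
    }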
(2) Win NT’s approach to file systems makes many file system operations very slow, which makes npm and other dev tools designed for Unix terribly slow on NT. Which is why Microsoft gave up on the otherwise excellent WSL1. If you were writing this kind of thing natively for Windows, you would stuff blobs into SQLite (e.g. a true “user space filesystem”) or ZIP files or some other container instead of stuffing 100,000 files into directories.
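A sketch of that "blobs in a container" idea using SQLite's C API; the database name and table schema here are purely illustrative (link with -lsqlite3):

    /* Sketch of "blobs in SQLite instead of 100,000 small files":
       one file on disk, many records inside it. */
    #include <sqlite3.h>

    int store_blob(const char *path, const void *data, int len)
    {
        sqlite3 *db;
        sqlite3_stmt *stmt;
        int rc = sqlite3_open("packages.db", &db);   /* single file on disk */
        if (rc != SQLITE_OK)
            return rc;

        sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS files"
                         "(path TEXT PRIMARY KEY, body BLOB)",
                     NULL, NULL, NULL);

        sqlite3_prepare_v2(db, "INSERT OR REPLACE INTO files VALUES (?, ?)",
                           -1, &stmt, NULL);
        sqlite3_bind_text(stmt, 1, path, -1, SQLITE_STATIC);
        sqlite3_bind_blob(stmt, 2, data, len, SQLITE_STATIC);
        rc = sqlite3_step(stmt);      /* one insert, no per-file metadata trips */
        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return rc == SQLITE_DONE ? SQLITE_OK : rc;
    }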
But as someone who's used all versions of Windows since 95, this paragraph strikes me the most:
> What I find disappointing is that, even though NT has all these solid design principles in place… bloat in the UI doesn’t let the design shine through. The sluggishness of the OS even on super-powerful machines is painful to witness and might even lead to the demise of this OS.
I couldn't agree more. Windows 11 is irritatingly macOS-like and for some reason has animations that make it appear slow as molasses. What I really want is a Windows 2000-esque UI with dense, desktop-focused UIs (for an example, look at Visual Studio 2022 which is the last bastion of the late 1990s-early 2000s-style fan-out toolbar design that still persists in Microsoft's products).
I want modern technologies from Windows 10 and 11 like UTF-8, SSD management, ClearType and high-quality typefaces, proper HiDPI scaling (something that took desktop Linux until this year to properly handle, and something that macOS doesn't actually do correctly despite appearing to do so), Windows 11's window management, and a deeper integration of .NET with Windows.
I'd like Microsoft to backport all that to the combined UI of Windows 2000 and Windows 7 (so probably Windows 7 with the 'Classic' theme). I couldn't care less about transparent menu bars. I don't want iOS-style 'switches'. I want clear tabs, radio buttons, checkboxes, and a slate-grey design that is so straightforward that it could be software-rasterised at 4K resolution, 144 frames per second without hiccups. I want the Windows 7-style control panel back.
All the talk here about how Windows had a better architecture in the beginning conveniently avoids the fact that Windows was well known for being over-designed while delivering much less than its counterparts in the networking arena for a long time.
It's not wrong to admire what Windows got right, but Unix got so much right by putting attention where it was needed.
Now, that being said, I think the Windows kernel sounded better on paper, but in reality Windows was never as stable as Linux (even in its early days) at doing everyday tasks (file sharing, web/mail/DNS server, etc.).
Even to this day, the Unix philosophy of doing one thing and doing it well stands: maybe the Linux kernel wasn't as fancy as the Windows one, but it did what it was supposed to do very well.
What if there are just a billion objects and you can't tell which ones need which permission, as an administrator? I couldn't tell if this example actually exists from the article, as it only talks abstractly about the subject. But Windows security stuff just sounds like a typical convoluted system that never worked. This is probably one of the few places where UN*X is better off, not that it's any good, since it doesn't support any use case other than separating the web server process from the DNS server process, but it's very simple.
What if the objects do not describe the items I need to protect in sufficient detail? How many privilege escalation / lateral movement vulns were there in Windows vs any UN*X?
Windows currently has a significant scaling issue because of its Processor Groups design; it is actually more of an ugly hack that was added in Windows 7 to support more than 64 logical processors. Everyone makes bad decisions when developing a kernel; the difference between the Windows NT kernel and the Linux kernel is that fundamental design flaws tend to eventually get fixed in the Linux kernel, while they rarely get fixed in the Windows NT kernel.
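A small sketch of where the seam shows: each processor group holds at most 64 logical processors, and a thread stays in the group it started in unless it is moved explicitly (the group number used below is illustrative):

    /* Sketch of the processor-group seam: at most 64 logical processors per
       group, and a thread runs in one group unless moved by hand. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        WORD groups = GetActiveProcessorGroupCount();
        printf("%u processor group(s)\n", (unsigned)groups);
        for (WORD g = 0; g < groups; g++)
            printf("  group %u: %lu logical processors\n",
                   (unsigned)g, GetActiveProcessorCount(g));

        /* Crossing into another group requires an explicit affinity call. */
        if (groups > 1) {
            GROUP_AFFINITY ga = { 0 };
            ga.Group = 1;
            ga.Mask  = 1;   /* CPU 0 of group 1 */
            SetThreadGroupAffinity(GetCurrentThread(), &ga, NULL);
        }
        return 0;
    }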
> Lastly, as much as we like to bash Windows for security problems, NT started with an advanced security design for early Internet standards given that the system works, basically, as a capability-based system.
I’m curious as to why the NT kernel’s security guarantees don’t seem to result in Windows itself being more secure. I’ve heard lots of opinions but none from a comparative perspective looking at the NT vs. UNIX kernels.
Fun fact: NT is a spiritual (and in some cases, literal) successor of VMS, which itself is a direct descendant of the RSX family of operating systems, which are themselves a descendant of a process control family of task runners from 1960. Unix goes back to 1964 - Multics.
Although yeah, Unix definitely has a much longer unbroken chain.
Predicting the imminent demise of Windows is as common, and accurate, of a take as saying this is the year of Linux on the desktop or that Linux takes over PC gaming.
# Shift each letter forward by one: "VMS" -> "WNT"
input_string = "VMS"
output_string = ''.join([chr(ord(char) + 1) for char in input_string])
print(output_string)
but what i don't agree with is that it was ever more advanced or "better" (in some hypothetical single-dimensional metric). the problem is that all that high-minded architectural art gets in the way of practical things:
- performance, project (m$ shipping product, maintenance, adding features, designs, agility, fixing bugs)
- performance, execution (anyone's code running fast)
- performance, market (users adopting it, building new unpredictable things)
it's like minix vs. linux again. sure minix was at the time in all theoretical ways superior to the massive hack of linux. except that, of course, in practice theory is not the same as practice.

in the mid-2000s to 2010s my workplace had a source license for the entire Windows codebase (view only). when the API docs and the KB articles didn't explain it, we could dive deeper. i have to say i was blown away and very surprised by "NT" - given its abysmal reliability i was expecting MS-DOS/Win 3.x level hackery everywhere. instead i got a good idea of Dave Cutler and VMS - it was positively uniform, solid, pedestrian and explicit. to a highly disgusting degree: 20-30 lines of code to call a function to create something that would be 1-2 lines of code in a UNIX (sure, we cheat and overload the return with error codes and status and the successful object id being returned - i mean, they shouldn't overlap, right? probably? yolo!).
in NT you create a structure containing the options, maybe call a helper function to default that option structure, call the actual function, if it fails because of limits, it reports how much you need then you go back and re-allocate what you need and call it again. if you need the new API, you call someReallyLongFunctionEx, making sure to remember to set the version flag in the options struct to the correct size of the new updated option version. nobody is sure what happens if getSomeMinorObjectEx() takes a getSomeMinorObjectParamEx option structure that is the same size as the original getSomeMinorObjectParam struct but it would probably involve calling setSomeMinorObjectParamExParamVersion() or getObjectParamStructVersionManager()->SelectVersionEx(versionSelectParameterEx). every one is slightly different, but they are all the same vibe.
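For a concrete, real instance of the pattern being parodied, consider InitializeProcThreadAttributeList plus CreateProcessW with STARTUPINFOEXW: call once to learn the required size, allocate, call again, set the struct's own size field, and pass an extra flag so the "Ex" variant is honored. A sketch:

    /* The pattern in the wild: ask for the size, allocate, call again, set the
       struct's own size field, and pass a flag so the "Ex" variant is used. */
    #include <windows.h>

    BOOL spawn_with_parent(HANDLE parent, wchar_t *cmdline)
    {
        SIZE_T size = 0;
        /* First call fails on purpose and reports how much memory is needed. */
        InitializeProcThreadAttributeList(NULL, 1, 0, &size);

        LPPROC_THREAD_ATTRIBUTE_LIST attrs =
            (LPPROC_THREAD_ATTRIBUTE_LIST)HeapAlloc(GetProcessHeap(), 0, size);
        InitializeProcThreadAttributeList(attrs, 1, 0, &size);
        UpdateProcThreadAttribute(attrs, 0, PROC_THREAD_ATTRIBUTE_PARENT_PROCESS,
                                  &parent, sizeof(parent), NULL, NULL);

        STARTUPINFOEXW si = { 0 };
        si.StartupInfo.cb = sizeof(si);   /* the struct declares its own size */
        si.lpAttributeList = attrs;

        PROCESS_INFORMATION pi;
        /* EXTENDED_STARTUPINFO_PRESENT says "I really mean the Ex version". */
        BOOL ok = CreateProcessW(NULL, cmdline, NULL, NULL, FALSE,
                                 EXTENDED_STARTUPINFO_PRESENT, NULL, NULL,
                                 &si.StartupInfo, &pi);
        if (ok) { CloseHandle(pi.hThread); CloseHandle(pi.hProcess); }
        DeleteProcThreadAttributeList(attrs);
        HeapFree(GetProcessHeap(), 0, attrs);
        return ok;
    }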
if NT was actual architecture, it would definitely be "brutalist" [1]
the core of NT is the antithesis of the New Jersey (Berkeley/BSD) [2] style.
the problem is that all companies, both micro$oft and commercial companies trying to use it, have finite resources. the high-architect brutalist style works for VMS and NT, but only at extreme cost. the fact that it's tricky to get signals right doesn't slow most UNIX developers down, most of the time, except for when it does. and when it does, a buggy, but 80%, solution is but a wrong stackoverflow answer away. the fact that creating a single object takes a page of code and doing anything real takes an architecture committee and a half-dozen objects that each take a page of (very boring) code, does slow everyone down, all the time.
it's clear to me, just reading the code, that the MBAs running micro$oft eventually figured that out and decided, outside the really core kernel, not to adopt either the MIT/Stanford or the New Jersey/Berkeley style - instead they would go with "offshore low bidder" style for the rest of whatever else was bolted on since 1995. dave cutler probably now spends the rest of his life really irritated whenever his laptop keeps crashing because of this crap. it's not even good crap code. it's absolutely terrible; the contrast is striking.
then another lesson (pay attention systemd people), is that buggy, over-complicated, user mode stuff and ancillary services like control-panel, gui, update system, etc. can sink even the best most reliable kernel.
then you get to sockets, and realize that the internet was a "BIG DEAL" in the 1990s.
ooof, microsoft. winsock.
then you have the other, other, really giant failure: openness. being open enough to share the actual code with the users is #1. #2 is letting them show the way and contribute. the micro$oft way was violent hatred of both ideas. oh, well. you could still be a commercial company that owns the copyright and not hide the, good or bad, code from your developers. MBAAs (MBA Assholes) strike again.
[1] https://en.wikipedia.org/wiki/Brutalist_architecture [2] https://en.wikipedia.org/wiki/Worse_is_better
VMS POSIX ports:
https://en.m.wikipedia.org/wiki/OpenVMS#POSIX_compatibility
VMS influence on Windows:
https://en.m.wikipedia.org/wiki/Windows_NT#Development
"Microsoft hired a group of developers from Digital Equipment Corporation led by Dave Cutler to build Windows NT, and many elements of the design reflect earlier DEC experience with Cutler's VMS, VAXELN and RSX-11, but also an unreleased object-based operating system developed by Cutler at Digital codenamed MICA."
Windows POSIX layer:
https://en.m.wikipedia.org/wiki/Microsoft_POSIX_subsystem
Xenix:
https://en.m.wikipedia.org/wiki/Xenix
"Tandy more than doubled the Xenix installed base when it made TRS-Xenix the default operating system for its TRS-80 Model 16 68000-based computer in early 1983, and was the largest Unix vendor in 1984."
EDIT: AT&T first had an SMP-capable UNIX in 1977.
"Any configuration supplied by Sperry, including multiprocessor ones, can run the UNIX system."
https://www.bell-labs.com/usr/dmr/www/otherports/newp.pdf
UNIX did not originally use an MMU:
"Back around 1970-71, Unix on the PDP-11/20 ran on hardware that not only did not support virtual memory, but didn't support any kind of hardware memory mapping or protection, for example against writing over the kernel. This was a pain, because we were using the machine for multiple users. When anyone was working on a program, it was considered a courtesy to yell "A.OUT?" before trying it, to warn others to save whatever they were editing."
https://www.bell-labs.com/usr/dmr/www/odd.html
Shared memory was "bolted on" with Columbus UNIX:
https://en.m.wikipedia.org/wiki/CB_UNIX
...POSIX implements setfacl.
My issue with Windows as an OS is that there's so much cruft, often adopted from Microsoft's older OSes, stacked on top of the NT kernel, effectively circumventing its design.
You frequently see examples of this in vulnerability write-ups: "NT has mechanisms in place to secure $thing, but unfortunately, this upper level component effectively bypasses those protections".
I know Microsoft would like to, if they considered it "possible", but they really need to move away from the Win32 and MS-DOS paradigms and rethink a more native OS design based solely on NT and evolving principles.
Fun fact: to accelerate the networking and Internet capability of Windows NT, given the complexity of coding a compatible TCP/IP implementation, the developers just took the entire TCP/IP stack from BSD and called it a day, since the BSD license allows for that [1].
They could not do that with Linux, since it is GPL-licensed code, and this is probably when Microsoft's FUD attacks on Linux started, culminating in the "Linux is cancer" statement by ex-CEO Ballmer.
The irony is that Linux, not the BSDs, is now the most used OS on the Azure cloud platform (where Cutler was the chief designer) [2].
[1] History of the Berkeley Software Distribution:
https://en.wikipedia.org/wiki/History_of_the_Berkeley_Softwa...
[2] Microsoft: Linux Is the Top Operating System on Azure Today:
For example.
- Portability. Who cares? Even in the 1990s, NetBSD was a thing. We've since learned that portability across conventional-ish hardware doesn't actually affect OS design very much.
Overall, the author is taking jargon / marketing terms too much at face value. In many cases, Windows and Unix use different terms to describe the same things, or we have terms like "object-oriented kernel" that, by default, I assume don't actually mean anything: "object-oriented" was too popular an adjective in the 1990s to be trusted.
- "NT doesn’t have signals in the traditional Unix sense. What it does have, however, are alerts, and these can be kernel mode and user mode."
A topic sentence should not start with the names of things; it is extraneous.
My prior understanding is that NT is actually better in many ways, but I don't feel like I am any closer to learning why.