The Evolution of Research in Computer Science: From 12-bit Beginnings to 64-bit Innovations

In developing products, the normal stages are typically research, advanced development, product development, manufacturing engineering, customer serviceability engineering, product release, product sustainability, and product retirement. In the modern day of agile programming and “DevOps” some or all of these steps are often blurred, but the usefulness of all of them is still recognized, or should be.

Today we will concentrate on research, which is the main reason why I gave my support to the Linux/DEC Alpha port in the years 1994 to 1996, even though my paid job was to support and promote the 64-bit DEC Unix system and (to a lesser extent) the 64-bit VMS system. It is also why I have continued to support Free and Open Source Software, and especially Linux, ever since.

In early 1994 there were few opportunities for a truly “Open” operating system. Yes, research universities were able to do research because of the quite liberal university source-code licensing of Unix systems, as could governmental and industrial researchers. However, the implementation of that research was still under the control of the commercial interests in computer science, and the speed of taking research to development to distribution was relatively slow. BSD-lite was still not on the horizon, as the USL/BSDI lawsuit was still going on. MINIX was still hampered by its restriction to educational and research use (not lifted until the year 2000). When you took it all into consideration, the Linux kernel project was the only show in town, especially when you took into account that all of its libraries, utilities and compilers were already 64-bit in order to run on Digital Unix.

Following close on the original release of GNU/Linux V1.0 (starting in late 1993 with distributions such as Soft Landing Systems, Yggdrasil, Debian, Red Hat, Slackware and others) came the need for low-cost, flexible supercomputers, initially called Beowulf systems. Donald Becker and Dr. Thomas Sterling codified and publicized the use of commodity hardware (PCs) and Free Software (Linux) to replace uniquely designed and manufactured supercomputers, producing systems that could deliver the power of a supercomputer for approximately 1/40th of the price. In addition, when the job that initially funded these computers was finished, the computer could be re-deployed to other projects either in whole, or by breaking it apart into smaller clusters. This model eventually became known as “High Performance Computing” (HPC), and the world’s 500 fastest computers use this technology today.

Before we get started on the “why” of computer research and FOSS, we should take a look at how “Research” originated in computer science. Research in computer science was originally done only by the entities that could afford impossibly expensive equipment, or could design and produce their own hardware. These were originally research universities, governments and very large electronics companies. Later on, smaller companies sprang up that also did research. Many times this research generated patents, which helped to fuel further research.

Eventually the area of software extended to entities that did not have the resources to purchase their own computers. Microsoft wrote some of their first software on machines owned by MIT. The GNU tools were often developed on computers that were not owned by the Free Software Foundation. Software did not necessarily require ownership of the very expensive hardware needed in the early days of computers. Today you could do many forms of computer science research on an 80 USD (or cheaper) Raspberry Pi.

Unfortunately today many companies have retired or greatly reduced their Research groups. Only a very few of them do “pure research” and even fewer license their research out to other companies on an equitable basis.

If you measure research today using patents as a yardstick, more than 75% of patents are awarded to small and medium-sized companies, and the number of patents awarded per employee is astonishing when you look at companies that have 1-9 employees. While it is true that large companies like Google and Apple apply for and receive a lot of patents overall, the number of patents per employee is won by small to medium companies hands down. Of course many readers of this do not like patents, and particularly patents on software, but it is a way of measuring research, and it shows that a lot of research is done today by small companies and even “lone wolves”.

By 1994 I had lived through all of the major upgrades to “address space” in the computer world. I started with a twelve-bit address space (4,096 twelve-bit words in a DEC PDP-8), moved to a 24-bit address space (16,777,216 bytes) in an IBM mainframe, then to 16 bits (65,536 bytes) in the DEC PDP-11, and finally to 32 bits (4,294,967,296 bytes) in the DEC VAX architecture. While many people never felt really constrained by the 32-bit architecture, I knew of many programmers and problems that were.
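As a quick sanity check, each of those limits falls straight out of a power of two (keeping in mind that the units differ: the PDP-8’s 4,096 are twelve-bit words, while the later figures are bytes). A tiny, machine-independent C program, included here only to illustrate the arithmetic, prints them all:

    #include <stdio.h>

    int main(void)
    {
        /* Address-space widths discussed in the text: an n-bit address
           gives 2^n addressable units. */
        const int widths[] = { 12, 16, 24, 32 };

        for (int i = 0; i < (int)(sizeof(widths) / sizeof(widths[0])); i++)
            printf("%2d bits -> %llu addressable units\n",
                   widths[i], 1ULL << widths[i]);

        /* 2^64 does not fit in a 64-bit unsigned integer, so state it
           directly rather than shifting. */
        printf("64 bits -> 18446744073709551616 addressable units\n");
        return 0;
    }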

The problem was with what we call “edge programming”, where the dataset that you are working with is so big that you cannot have it all in the same memory space. When this happens you start to “organize” or “break down” the data, then write programs to transfer results from address space to address space. Often this means you have to save the metadata (or partial results) from one address space, then apply it to the next address space, and often this causes problems in getting the program correct.
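As a minimal sketch of what that “breaking down” can look like in practice (the file name, the chunk size and the running-sum “result” below are purely illustrative assumptions), the following C program reads a dataset far larger than its address space one chunk at a time and carries partial results between chunks:

    #include <stdio.h>
    #include <stdlib.h>

    /* Process a (hypothetical) huge file of doubles in fixed-size chunks,
       carrying a partial result between chunks instead of trying to map
       the whole dataset into one address space. */

    #define CHUNK_VALUES (1 << 20)   /* values processed per pass */

    int main(void)
    {
        FILE *f = fopen("huge_dataset.bin", "rb");   /* hypothetical input */
        if (!f) { perror("huge_dataset.bin"); return 1; }

        double *chunk = malloc(CHUNK_VALUES * sizeof(double));
        if (!chunk) { fclose(f); return 1; }

        double running_sum = 0.0;        /* partial result carried forward */
        unsigned long long count = 0;
        size_t n;

        while ((n = fread(chunk, sizeof(double), CHUNK_VALUES, f)) > 0) {
            for (size_t i = 0; i < n; i++)
                running_sum += chunk[i];
            count += n;
            /* any other metadata must be saved or merged here, before the
               next chunk overwrites the buffer */
        }

        if (count > 0)
            printf("mean of %llu values: %f\n", count, running_sum / count);

        free(chunk);
        fclose(f);
        return 0;
    }

The bookkeeping at the comment inside the loop is exactly where the bugs creep in once the partial results become more complicated than a single running sum.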

What types of programs are these? Weather forecasting, climate study, genome research, digital movie production, emulating a wind tunnel, and modeling an atomic explosion.

Of course all of these are application-level programs, and any implementation of a 64-bit operating system would probably serve the purpose of writing that application.

However, many of these problems are at the research level, and whether or not the finished application was FOSS, the tools used could make a difference.

One major researcher in genome studies was using the proprietary database of a well-known database vendor. That vendor’s licensing made it impossible for the researcher to simply image a disk with the database on it, and send the backup to another researcher who had the same proprietary database with the same license as the first researcher. Instead the first researcher had to unload their data, send the tapes to the second researcher and have the second researcher load the tapes into their database system.

This might have been acceptable for a gigabyte or two of data, but was brutal for the petabytes (one million gigabytes) of data that was used to do the research.

This issue was solved by using an open database like MySQL. The researchers could just image the disks and send the images.

While I was interested in 64-bit applications and what they could do for humanity, I was much more interested in 64-bit libraries, system calls, and the efficient implementation of both, which would allow application vendors to use data sizes almost without bound in applications.

Another example is the rendering of digital movies. With analog film you have historically had 8mm, 16mm, 35mm and (most recently) 70mm film, and (of course) in color each “pixel” has, in effect, infinite color depth due to the analog qualities of film. With analog film there is also no concept of “compression” from frame to frame. Each frame is a separate “still image”, which our eye blends into the illusion of movement.

With digital movies there are so many considerations that it is difficult to say what the “average” size of a movie, or even of one frame, is. Is the movie wide screen? 3D? IMAX? Standard or high definition? What is the frame rate and the length of the video? What is the resolution of each frame?

We can get an idea of how big these video files can be (for a one-hour digital movie): roughly 3GB at 2K, 22GB at 4K and 40GB at 8K. Since a 32-bit address space allows at most either 2GB or 4GB of address space (depending on the implementation), you can see that fitting even a relatively short “low technology” film into memory at one time is a problem.
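As a rough back-of-the-envelope comparison (the resolutions, 24-bit color and 24 frames per second below are illustrative assumptions, not figures from any particular production), the uncompressed numbers are far larger still, which is why the compressed sizes above are achievable at all:

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative assumptions: 3 bytes per pixel (24-bit color),
           24 frames per second, one hour of footage. */
        const unsigned long long bytes_per_pixel = 3;
        const unsigned long long fps = 24;
        const unsigned long long seconds = 3600;

        const struct { const char *name; unsigned long long w, h; } res[] = {
            { "2K (1920x1080)", 1920, 1080 },
            { "4K (3840x2160)", 3840, 2160 },
            { "8K (7680x4320)", 7680, 4320 },
        };

        for (int i = 0; i < 3; i++) {
            unsigned long long frame = res[i].w * res[i].h * bytes_per_pixel;
            unsigned long long hour  = frame * fps * seconds;
            printf("%-16s %6.1f MB/frame, %8.1f GB/hour uncompressed\n",
                   res[i].name, frame / 1e6, hour / 1e9);
        }
        return 0;
    }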

Why do you need the whole film? Why not just one frame at a time?

It has to do with compression. Films are not sent to movie theaters or put onto a physical medium like Blu-ray in a “raw” form. They are compressed with various compression techniques through the use of a “codec”, which uses a mathematical technique to compress, then later decompress, the images.

Many of these compression techniques store a particular frame as a base, then apply only the differences over the next several (much smaller) frames. If this were continued over the course of the entire movie, the problem comes when there is some glitch in the process: how far back in the file do you have to go in order to fix the “glitch”? The answer is to store another complete frame every so often to “reset” the process and start the “diffs” all over again. There might be some small “glitch” in the viewing, but typically one so small that no one would notice it.
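A toy illustration of that keyframe-plus-differences idea, and emphatically not any real codec (real codecs layer motion estimation, transforms and entropy coding on top of this), might look like the following C sketch, where a full frame is stored every KEYFRAME_INTERVAL frames and only per-pixel deltas in between:

    #include <stdint.h>
    #include <string.h>

    #define FRAME_PIXELS (1920 * 1080)   /* illustrative frame size */
    #define KEYFRAME_INTERVAL 30         /* full frame every 30 frames */

    /* Toy "encoder": periodically emit the frame as-is (a keyframe),
       otherwise emit only the per-pixel difference from the previous
       frame; long runs of zero differences compress well downstream. */
    void encode_frame(const uint8_t *frame, const uint8_t *prev,
                      uint8_t *out, int frame_number, int *is_keyframe)
    {
        *is_keyframe = (prev == NULL) || (frame_number % KEYFRAME_INTERVAL == 0);

        if (*is_keyframe) {
            memcpy(out, frame, FRAME_PIXELS);           /* complete frame */
        } else {
            for (long i = 0; i < FRAME_PIXELS; i++)     /* delta frame */
                out[i] = (uint8_t)(frame[i] - prev[i]);
        }
    }

    /* Toy "decoder": a keyframe resets the state, a delta frame is added
       to the previously reconstructed frame.  A glitch therefore only
       propagates until the next keyframe. */
    void decode_frame(const uint8_t *in, const uint8_t *prev,
                      uint8_t *out, int is_keyframe)
    {
        if (is_keyframe) {
            memcpy(out, in, FRAME_PIXELS);
        } else {
            for (long i = 0; i < FRAME_PIXELS; i++)
                out[i] = (uint8_t)(prev[i] + in[i]);
        }
    }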

Throw in the coordination needed by something like 3D or IMAX, and you can see the huge size of a cinematic event today.

When investigating climate change, it is nice to be able to access, in a 64-bit virtual memory, over 32,000 bytes for every square meter of the surface of the earth, including all the oceans.
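A quick back-of-the-envelope check of that claim, assuming a total surface area for the earth of roughly 5.1 x 10^14 square meters and the flat 32,000 bytes per square meter mentioned above:

    #include <stdio.h>

    int main(void)
    {
        const double earth_surface_m2 = 5.1e14;  /* approx. land plus oceans */
        const double bytes_per_m2     = 32000.0; /* figure from the text */
        const double total_bytes      = earth_surface_m2 * bytes_per_m2;
        const double addr_64bit       = 18446744073709551616.0; /* 2^64 */

        printf("dataset: %.3e bytes (about %.1f exabytes)\n",
               total_bytes, total_bytes / 1e18);
        printf("2^64:    %.3e bytes; the dataset %s fit\n",
               addr_64bit, total_bytes < addr_64bit ? "does" : "does not");
        return 0;
    }

That works out to roughly 16 exabytes of data, which does indeed fit, just barely, inside the roughly 18.4 exabytes that a 64-bit address space provides.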

When choosing an operating system for doing research there were several options.

You could use a closed-source operating system. You *might* be able to get a source code license, sign a non-disclosure agreement (NDA), do your research and publish the results. The results would be some type of white paper delivered at a conference (I have seen many of these white papers), but there would be no source code published because that was proprietary. A collaborator would have to go through the same steps you did to get the sources (if they could at all), and then you could supply “diffs” to that source code. Finally, there was no guarantee that the research you had done would actually make it into the proprietary system; that would be up to the vendor of the operating system. Your research could be for nothing.

It was many years after Windows NT ran as a 32-bit operating system on the Alpha that Microsoft released a 64-bit address space in any of their operating systems. Unfortunately this was too late for Digital, a strong partner of Microsoft, to take advantage of the 64-bit address space that the Alpha facilitated.

We are entering an interesting point in computer science. Many of the “bottlenecks” of computing power have, for the most part, been overcome. No longer do we struggle with single-core, 16-bit, monolithic, sub-megabyte-memory hardware running at sub-1MHz clock speeds and supporting only 90KB floppy disks. Today’s 64-bit, multi-core, multi-processor systems, with multiple gigabytes of memory, solid-state storage and multiple Gbit/second LAN networking, fit into laptops, to say nothing of server systems, and give us a much more stable basic programming platform.

Personally, I waited for a laptop that would support USB at 40 Gbit per second and things like Wi-Fi 7 before I purchased what might be the last laptop I buy in my lifetime.

At the same time, we are moving from a world where SIMD meant little more than GPUs that could paint screens very fast into MIMD programming hardware, with AI and quantum computing pushing the challenges of programming even further. All of these will take additional research into how to integrate them into everyday programming. My opinion is that any collaborative research, to facilitate a greater chance of follow-on collaborative advanced development and implementation, must be done with Free and Open Source Software.

About Jon "maddog" Hall:

Jon "maddog" Hall is the Board Chair Emeritus of the Linux Professional Institute. Since 1969, Mr. Hall has been a programmer, systems designer, systems administrator, product manager, technical marketing manager, author and educator, currently working as an independent consultant. Mr. Hall has concentrated on Unix systems since 1980 and Linux systems since 1994, when he first met Linus Torvalds and correctly recognized the commercial importance of Linux and Free and open source Software. As the Executive Director of Linux International(TM), Mr. Hall has traveled the world speaking on the benefits of open source Software having received his BS in Commerce and Engineering from Drexel University, and his MSCS from RPI in Troy, New York.
