When I picked up the Sunday NY Times in my driveway on November 8, 2015, I believed the world was about to change. That morning, the Times, in partnership with Google, distributed 1.3 million Cardboard VR headsets. A generation of people would be exposed to VR and the world would never be the same.
Between 2020 and 2022, Meta/Facebook sold 15 million Quest 2 headsets. We are squarely in the middle of a new generation of VR usage. (We have 3 Quest 2 headsets in our house.)
This upsurge of interest in virtual reality has me nostalgic for my time in the first VR wave, back in the 1990s (technically, this might have been the second or third wave). But when I started to research on the web, I would see names like Jaron Lanier, Ivan Sutherland, John Carmack and even the Nintendo Virtual Boy. I was upset that the articles didn’t mention my corner of the universe (Sense8, Gemini, etc.)
I realized that a good reason these stories didn’t cover Sense8 and Gemini was that there are very few artifacts available on the web. No images. No videos. No stories. So the goal of this blog is to rectify that situation. I found a collection of old DVDs with source code, executables, demos and notes. I am going to cull through this info and post what I remember…and whatever I can actually run on a computer 25 years in the future.
I found a number of Sense8/WTK demos in a directory labeled “radiocity”. A number of the files were in NFF format while others were in WRL (VRML) format. The NFF files were referenced in a readme noting they were developed in “Real Light from ATMA Rendering Systems srl in Milan, Italy”. I am pretty sure the WRL files were examples from Lightscape.
Radiosity involved pre-calculating lighting effects into the model. In the NFF models, this was captured in the color of the individual polygons.
This sounded quite similar to what is now known as “baking in” the lighting. I asked chat.openai.com to explain the difference:
Radiosity was one of the first techniques used in baked lighting, although it has been updated and improved over time. The radiosity method was first developed in the 1980s as a way of simulating the diffuse global illumination of a 3D environment. It was initially used in computer graphics and animation, and it was later adapted for use in game development.
In the early days of game development, radiosity was used to calculate the static lighting of a scene, which was then baked into the game engine. This allowed games to have more realistic lighting effects, without the need for real-time lighting calculations, which were computationally expensive and would have slowed down the game.
However, as game technology has advanced and become more powerful, more advanced lighting techniques and algorithms have been developed, such as photon mapping, path tracing, and voxel cone tracing. These methods can produce more accurate and realistic lighting effects, but they also require more computational power. As a result, radiosity is still used in some game engines, but it is not the only method used for baked lighting.
You can see in the wireframe rendering of “labV1.wrl” how the lighting is captured directly into the geometry model (particularly in the upward wall sconces and the reflected sunlight around the windows).
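As a rough illustration of what “capturing lighting in the model” means, here is a minimal Python sketch. This is my own illustration, not the NFF tooling or the WTK API: it precomputes a simple diffuse color per polygon and stores it with the geometry, so rendering needs no runtime lighting at all. (Real radiosity also accounts for light bounced between surfaces, which this sketch omits.)

```python
import math

def bake_polygon_colors(polygons, light_dir, base_color=(1.0, 1.0, 1.0)):
    """Precompute a lit color for each polygon and store ("bake") it
    with the geometry, so the renderer just reads stored colors."""
    # normalize the light direction
    norm = math.sqrt(sum(c * c for c in light_dir))
    l = [c / norm for c in light_dir]
    baked = []
    for poly in polygons:
        nx, ny, nz = poly["normal"]
        # simple Lambertian term; true radiosity would add inter-surface bounce
        intensity = max(0.0, nx * l[0] + ny * l[1] + nz * l[2])
        baked.append(dict(poly, color=tuple(intensity * c for c in base_color)))
    return baked

quad = {"normal": (0.0, 1.0, 0.0)}            # upward-facing polygon
lit = bake_polygon_colors([quad], (0.0, 1.0, 0.0))
print(lit[0]["color"])                         # fully lit: (1.0, 1.0, 1.0)
```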
Roy Latham’s Real Time Graphics newsletter from April/May 1998 featured one of the radiosity models, touting the capabilities of a new ELSA board:
It was great to find this same model in the radiocity directory!
I recall this being a pretty popular “benchmark” model at the time. Surprisingly, the only other current reference to this model I can find is in a 1996 VRML whitepaper.
The following is courtesy of Professor Robert Stone, who led this project for VR Solutions/Virtual Presence in the mid-1990s.
English Heritage: Virtual Stonehenge
“…the largest and most challenging PC-based heritage reconstruction carried out to date”
Virtual Heritage Conference & Exhibition, December 1996
In 1995, English Heritage completed the most intensive survey of the Stonehenge area ever undertaken, generating a large database of information. It is the nature of databases that, whilst they contain much information that is significant or useful, this information is difficult to differentiate. English Heritage saw VR as a possible solution to their problem. The brief to VP Group was to produce a high quality and accurate record of the stones and their environs in their present state. Whilst not designed to replace the real experience, the visualisation was to be detailed enough to allow people to “walk” amongst the stones and inspect the different textures in 3D – something the general public is no longer allowed to do.
During the initial project review stage, Intel Corporation (UK) approached English Heritage with an offer to co-sponsor the Project, through their Community Liaison Programme. In conjunction with Intergraph (UK), Intel selected the Pentium Pro-based TDZ/GLZ Workstation series, on which the model was to be developed and finally demonstrated. Before the team could begin the time-consuming process of inputting all the information from English Heritage’s digital survey into Sense8’s VR package, WorldToolKit, a surface representation of each stone was manually built up from point data extrapolated from hundreds of stereo photographs. Around 60,000 points made up each of the 80 or so surveyed stones. This figure had to be reduced to allow real-time rendering to take place on the target Intergraph platforms. A painstaking manual process gradually “decimated” these point data. The result was 10 separate models of each stone, the level of detail on each chosen to correspond to a variety of end user viewing distances – the further away, the lower the level of detail. At run-time the software selects the 5 most appropriate levels of detail, based on the characteristics of the computer being used.
One from each of a stereo pair of photographs was then digitised, processed and texture mapped onto the geometry of the relevant stone. Even small surface features such as cracks, lichens and fungi are clearly visible. The full version of the Stonehenge model requires 80 Mb of texture RAM. Lower resolution versions (26 Mb and 8 Mb) have, however, been produced. Stonehenge’s virtual landscape was created from digital topographic information derived from aerial photography and boasts all the features contained within the real area – barrows, ditches, roads, the Avenue and the current Visitors’ Centre. In geometric terms, the entire model contained 50,000 polygons – 40,000 of these described the stones and immediate terrain, the remaining 10,000 occupying the more distant terrain (area: 2.5 x 2.5 kilometres).
Other historical features – ditches, banks and the like have been geometrically exaggerated, otherwise they would not be visible to the user when at normal eye height in the virtual world. It took four developers six months to complete the Project. Virtual Stonehenge was launched at the London Planetarium on June 20, 1996, a few hours before the actual Summer Solstice. Following a description of the Solstice by the renowned astronomer and celebrity Patrick Moore, English Heritage’s Chairman, Sir Jocelyn Stevens, donned a VR headset and set off to explore Virtual Stonehenge, pausing to view the night-time sky (see below), the real-time sunrise (see below) and to remove all 20th Century, man-made artefacts, returning the site to a near-“virgin” condition, as is planned for the year 2000.
Astronomical Mapping. The basic source data for the star positions were originally downloaded from the Internet using Right Ascension and Declination for stars with a greater Apparent Visual Magnitude than 3.55. This form was chosen, as it was not practical to represent the stars according to their “real world” positions (in this case the virtual world would have had a bounding box measured in light years!). Right Ascension is measured in hours (24) and had to be converted to degrees, and Declination was measured in degrees (-90° to +90°). These can be thought of as the longitude and latitude lines that span the Earth. The star positions were then projected onto a sphere surrounding the Stonehenge model from the celestial equator (ie. centre of the earth). Once the stars were spherically projected, they were scaled according to their Apparent Visual Magnitude. The position of Stonehenge from the centre of the Earth then had to be taken into consideration as the “star sphere” was being projected from the celestial equator. This involved shifting the whole star sphere and then spinning the sphere around an axis close to the North Star.
Sunrise Effect. Various methods of achieving a real-time sunrise effect were discussed. It was decided that a method based on using smooth shaded ellipses would take full advantage of the Intergraph hardware and Sense8’s WorldToolKit. For the sunrise effect to work, there were two objects primarily interacting with each other – the hemisphere surrounding Stonehenge (the sky) and a “virtual” sun. The sun had various parameters which could be set. The actual sun object can be thought of as a number of bands of differing circumference (each representing a different fixed colour) centred around a point. If one imagines a point travelling from the centre of the bands to the outside, the colour of the point would then gradually change. If the distance between bands is increased and the point is travelling at the same speed, then the perceived change in colour would be less obvious. This is useful when one wishes to fade gradually from night to day over a long period, and for the extreme effects when the fade changes from red to yellow over very short distances. The interpolation between the different colour bands was computed and stored in a colour look-up table to optimise execution time. Each point in the hemisphere (sky dome) was individually coloured according to its distance from the sun. This was achieved through interaction between the sun object and the sky dome. The virtual sun was initially placed at its furthest band distance from the dome and then gradually moved inwards. This created the essence of the sunrise effect. To enhance the effect of a genuine sunrise further, the spheres (bands) were changed to ellipses, thus recreating the atmospheric refraction that is seen on Earth. Through the use of an ASCII text file, the developers were then allowed to experiment with various parameters for each band (eg. the number of bands, the colour, individual radii, the number of colour interpolations and the three dimensional elliptical shape used).
Using this method meant the demonstration could take advantage of Gouraud shading, thereby using a minimum amount of texture memory and keeping run-time efficiency at an optimum level.
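The colour-band and look-up-table approach described above can be sketched roughly as follows. This is my own reconstruction in Python, not the original code: the band radii and colours are invented, and the original used WTK sun and sky-dome objects rather than plain functions.

```python
def build_lut(bands, steps_per_band=16):
    """bands: list of (radius, (r, g, b)) pairs, sorted by radius.
    Precompute interpolated colours between bands for fast runtime lookup."""
    radii, colors = [], []
    for (r0, c0), (r1, c1) in zip(bands, bands[1:]):
        for s in range(steps_per_band):
            t = s / steps_per_band
            radii.append(r0 + t * (r1 - r0))
            colors.append(tuple(a + t * (b - a) for a, b in zip(c0, c1)))
    radii.append(bands[-1][0])
    colors.append(bands[-1][1])
    return radii, colors

def color_at(distance, radii, colors):
    """Colour for a sky-dome vertex at this distance from the sun centre."""
    for r, c in zip(radii, colors):
        if distance <= r:
            return c
    return colors[-1]  # beyond the outermost band: night-sky colour

# e.g. a yellow core fading through red to a dark blue night sky
bands = [(0.0, (1.0, 0.9, 0.2)), (0.3, (0.9, 0.2, 0.1)), (1.0, (0.0, 0.0, 0.2))]
radii, colors = build_lut(bands)
print(color_at(0.0, radii, colors))   # yellow core colour
```

Moving the “sun” inward frame by frame, and recolouring each dome vertex by its distance from it, gives the gradual sunrise described in the text.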
In 1996, Intel and the English Heritage foundation sponsored a virtual reality simulation of Stonehenge built with WorldToolKit.
According to Robert Stone’s YouTube post, this was “the first ever Virtual Stonehenge demo created for English Heritage and presented by the late Sir Patrick Moore at the 1996 Summer Solstice (June) at the London Planetarium”, while a subsequent post noted this was “merely to publicize Intel’s Pentium Pro-based TDZ/GLZ Workstations”
From what I can tell from examining the various data files included in the demo, a lot of attention was paid to recreating the star fields above the ancient monument along with the sunrise…all intrinsic elements of the monument.
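The star-field mapping described in the project notes boils down to a classic spherical projection: convert Right Ascension (hours) and Declination (degrees) to a point on a large sphere, then scale each star by its Apparent Visual Magnitude. A Python sketch follows; the axis conventions and the magnitude scaling formula are my assumptions, not the original app's.

```python
import math

def star_position(ra_hours, dec_degrees, radius=1000.0):
    """Project a star onto a sphere of fixed radius around the model."""
    ra = math.radians(ra_hours * 15.0)     # 24 hours -> 360 degrees
    dec = math.radians(dec_degrees)
    x = radius * math.cos(dec) * math.cos(ra)
    y = radius * math.cos(dec) * math.sin(ra)
    z = radius * math.sin(dec)             # +z toward the celestial pole
    return (x, y, z)

def star_scale(magnitude, brightest=-1.5, cutoff=3.55):
    """Scale star size by Apparent Visual Magnitude (smaller = brighter);
    stars at the 3.55 cutoff shrink to zero."""
    return (cutoff - magnitude) / (cutoff - brightest)

# Polaris: RA ~2.53 h, Dec ~ +89.26 degrees -- lands almost on the pole axis
x, y, z = star_position(2.53, 89.26)
print(round(z))  # -> 1000 (essentially at the top of the sphere)
```

The final shift and spin of the whole “star sphere” for Stonehenge’s latitude would then be a single rotation applied to all of these points.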
Getting the application to work again was pretty challenging. Many of the file locations were hardcoded to specific directories. I actually needed to use a hex editor on the executable to find some of these locations. And while the monument was modeled with 4 “levels of detail”, for some reason, as I approached the monolith, the stones would mysteriously disappear.
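For reference, the distance-based level-of-detail switching the project notes describe amounts to something like this sketch. It is illustrative only: the thresholds are invented, and the actual WTK LOD mechanism is not shown here.

```python
def select_lod(distance, thresholds):
    """Return the index of the model to draw for a given viewer distance.
    thresholds[i] is the maximum distance at which model i is used;
    beyond the last threshold the coarsest model is drawn."""
    for i, limit in enumerate(thresholds):
        if distance <= limit:
            return i
    return len(thresholds)  # coarsest model

# e.g. 5 levels: full detail within 10 m, coarsest beyond 200 m
thresholds = [10, 25, 60, 200]
print(select_lod(5, thresholds))    # -> 0 (full detail)
print(select_lod(500, thresholds))  # -> 4 (coarsest)
```

A bug anywhere in this kind of selection (or a missing model file for the closest level) would produce exactly the sort of vanishing stones I was seeing up close.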
And while I have always been fascinated by Stonehenge, what really intrigues me about this model is the recreation of the Visitor Center. Access to the monument has been greatly curtailed. While the monument has remained the same for centuries, the visitor center from 1996 is long gone, replaced sometime in 2013.
I am not sure if these turnstiles are from this picture, but since you are no longer allowed to physically approach the stones, I am certain they have been permanently removed.
The simulation includes a model of the walkway/ramp that allowed visitors to safely pass under the nearby A360 road. The walkway featured an image of the complete monument, while the recreation itself could be seen in the distance.
According to Professor Bob Stone, creator of the experience, while primarily educational, part of the reason for the project was to explore the idea of removing the old Visitor Center.
Prof Bob Stone – “Yep, I led this project with the VR Solutions/Virtual Presence lot, liaising with English Heritage to get the stone circle and historical elements right (before it was hijacked by Intel to show off their Pentium Pro chipset). Still have the project details, images and grainy video of the project too!”
Andrew Connell – “That was one of the last Sense8 apps I wrote myself. I remember late night hacking to get the sunrise and star effects working for the planetarium launch. We used the phrase ‘mathematically accurate’ on the press info at the time, but mostly I just fiddled with it until I liked the effect! But it was an achievement to get it all working back in 95 using laser scan models and decent resolution unique image sets for every stone.”
SpaceRocks was my personal project. It was a way for me to learn WorldToolKit and morphed into a useful demo that showcased many WTK features. As new features were added, like DirectX support or specialized sound, I would incorporate them into SpaceRocks.
Much like Rover and Sailing, it was ported to virtually every platform supported by WTK, from SGI Reality Engines to Sun workstations to the latest PC boards from 3DLabs.
SpaceRocks was an homage to the classic game Asteroids, taken to the next level with textures from real asteroids and deep space objects from the Hubble Space Telescope.
As time went on, I added additional features. In the image above, I used the ability to layer on a 2D image representing a cockpit viewport.
Years ago, I built a website for SpaceRocks on my personal website:
The original README file includes a bunch of notes on the features I added over time, as well as some background on the app. This little application showcased a long list of technologies:
Real-time 3D graphics
Utilizes hardware acceleration
Hierarchical Scene Graph
Dynamic material definition
Selectable texture filtering
Multiple rendering modes
Spatialized 3D sound
Cross Platform (WinNT/Win95/SGI/Sun/DEC)
Performance independent motion
Dialog Boxes (cross platform)
Object Oriented design
Per Object Data
User (re)defined sensor model (ie myMouse)
Stereo viewing options
Support for multiple sound devices
Support for popular VR peripherals
Models modifiable at runtime
Reads VRML files across the internet
Sensors modifiable at runtime
Sound modifiable at runtime
Network enabled (coming soon)
Multiple simultaneous viewpoints
Dynamic window creation
With the release of the Oculus Quest 2 headset in 2020 and the availability of Unity, I decided to re-envision SpaceRocks. While it doesn’t really look anything like the original, it captures many of the same ideas.
And yes, this version includes a handheld llama that fires asteroid killing energy balls.
SpaceRocksVR also has its own blog post on my personal site:
At Siggraph97 (August 3-8, 1997 in LA), along with the Cave of Lascaux exhibit, Sense8 also debuted an experience based on data from NASA’s Mars Pathfinder mission. In the CAVE, visitors were able to set foot on the red planet and explore the area in vivid 3D.
I was excited to find this article in a 1999 issue of Journal of Geophysical Research.
Here is another paper from the NASA website. And one from the Fifth International Conference on Mars
I honestly don’t remember ever interacting with the researchers, but I have reached out to the people I could find on LinkedIn. I hope to hear from them and would love to share their memories/thoughts on the project. Here are their names and LinkedIn profiles:
Clearly, this demo was “hot off the presses” and I recall how excited we were to be able to showcase this application/data. Prior to the landing on July 4, 1997, Sense8 released this press release (thanks to HPCWire). Siggraph was only 2 months later in August.
June 27, 1997
Mill Valley, CA -- As part of the objective of the Mars Pathfinder Mission, NASA will use simulation application software built with WorldToolKit from SENSE8 Corporation to create an interactive photo-realistic environment of Mars. When the Mars Pathfinder spacecraft lands on July 4, it will release a single vehicle microrover -- Sojourner -- equipped with a pair of stereoscopic cameras and other sensors onto the Mars surface. These instruments will allow the Sojourner to investigate the geology, surface morphology, rotational, and orbital dynamics of Mars.
The dual cameras will take stereoscopic images of Mars and send them back to the Mission Control at NASA, where these images will be converted into 3D-Martian terrain geometry using a WorldToolKit-based application. The application will then texture-map these images onto the 3D terrain and create a virtual Martian environment. This WorldToolKit application will allow NASA scientists to interactively explore the terrain in real-time and send the Sojourner to specific areas for further investigation.
"NASA has developed applications using WorldToolKit for several scientific research projects in the past, including the Viking 1 mission, which is being displayed at the Smithsonian Institution's National Air & Space Museum," said Daryl Rasmussen, telepresence researcher who heads the Mars Virtual Control Center at NASA Ames Research Center. "WorldToolKit enables Mission Control scientists to become 'virtual astronauts'. The NASA developed application not only allows us to view 3D images taken from Mars, it also enables us to fully immerse ourselves into a virtual Martian environment."
"SENSE8 is very excited to once again contribute our technology to the field of scientific research," said Tom Coull, president of SENSE8. "WorldToolKit offers developers a rich set of 3D graphics and sound capabilities, which has enabled NASA to quickly prototype and develop this mission-critical application."