Here is the calendar for September (click for hi-res image). Two anniversaries coming up. See more calendars here.
I have been reading a few steampunk novels lately – I have a great fondness for the genre. Charles Babbage’s planned “Difference Engine” and “Analytical Engine” always play a large part in the fictional universe of such books. However, as Francis Spufford has pointed out, this does rely on some counterfactual history.
Babbage never completed any of his major devices, although redesigned working difference engines were built by Per Georg Scheutz (1843), Martin Wiberg (1859), and George B. Grant (1876). With much fanfare, the Science Museum, London reconstructed Babbage’s “Difference Engine No. 2” between 1985 and 2002, making only essential fixes to the original design – and it works! However, the pinnacle of this kind of technology was probably the beautiful handheld Curta calculator, produced in Liechtenstein by Curt Herzstark from 1947.
The world’s first programmable digital computer, the Colossus, was in fact built four years before the Curta, in 1943, by English electrical engineer Tommy Flowers. The wartime secrecy associated with his work has kept this monumental achievement largely in the dark.
The significance of the Colossus has also been obscured by a kind of “personality cult” built up around Alan Turing, much like the one built up around Babbage. Turing was one of a number of people who contributed to the design of the cryptographic “Bombe” at Bletchley Park, and he also did important theoretical work. However, the fundamental result of Turing’s 1936 paper, “On Computable Numbers, with an Application to the Entscheidungsproblem,” was not actually new, as Turing himself admits on the second page: “In a recent paper Alonzo Church † has introduced an idea of ‘effective calculability,’ which is equivalent to my ‘computability,’ but is very differently defined. Church also reaches similar conclusions about the Entscheidungsproblem ‡. The proof of equivalence between ‘computability’ and ‘effective calculability’ is outlined in an appendix to the present paper.”
Turing’s life was more colourful than either Church’s or Flowers’s, however, and this may be why he is far more famous. In a similar way, Babbage lived a more colourful life than many of his contemporaries, including his collaboration with the forward-thinking Countess of Lovelace.
1: Charles Babbage, 2: Augusta Ada King-Noel (née Byron, Countess of Lovelace), 3: Alonzo Church, 4: Alan Turing, 5: Tommy Flowers
The chart below (click to zoom) puts the work of Babbage and Flowers in a historical context. Various devices are ranked according to their computational power in decimal digits calculated per second (from 1 up to 1,000,000,000,000,000). Because this varies so dramatically, a logarithmic vertical scale is used. The Colossus marks the beginning of a chain of “supercomputers,” often built for government use, with power doubling every 1.84 years (pink line). Starting with the Intel 4004 in 1971, there is also a chain of silicon chips, with power doubling every 1.74 years (blue line). At any given point in time, supercomputers are between 1,000 and 3,000 times more powerful than the chips, but the chips always catch up around 20 years later. The revolutionary PDP-8 of 1965 sits between the two chains.
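The “catch up around 20 years later” claim can be sanity-checked with a little arithmetic (a rough sketch: the 1.74-year doubling time and the 1,000–3,000× gap are taken from the chart; everything else is illustrative):

```python
import math

def years_to_close(factor, doubling_time_years=1.74):
    """Years for chips to reach the power a supercomputer has today,
    i.e. to close a fixed gap of `factor`, at one doubling per 1.74 years."""
    return math.log2(factor) * doubling_time_years

low = years_to_close(1_000)
high = years_to_close(3_000)
print(f"{low:.1f} to {high:.1f} years")  # → 17.3 to 20.1 years
```

So a 1,000× to 3,000× gap at that doubling rate corresponds to a lag of roughly 17 to 20 years, consistent with the chart.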
One thing that stands out on this chart is the gap between Babbage’s Difference Engine and the later digital computers – even the Colossus was around 280 times more powerful than the Difference Engine (carrying out a simpler task much more quickly). Steampunk fiction often suggests that steam power would have made the Difference Engine faster. However, it turns out that the mechanism jams if it is cranked too quickly. Complex mechanical calculating devices simply cannot operate that fast.
Morse telegraph key (photo: Hp.Baumeler)
In fact, Charles Babbage may actually have distracted people from the way forward. Samuel Morse’s improved telegraph was officially operational in 1844. It used electromechanical principles that were also used in the Colossus a century later. Electricity also has the advantage of travelling at the speed of light, along wires that can be made extremely thin. What might the world have been like had electromechanical computing developed earlier? The chart also shows the 1964 fluidic computer FLODAC. This was a fascinating idea that was abandoned after a successful proof of concept (although a 1975 film portrayed it as the future). What if that idea had been launched in Victorian Britain?
Here is my personal world ranking of the top twenty Challenger-class solar cars. It was produced entirely algorithmically by using linear regression on historical data to build mappings between WSC rankings and those of other races, and then applying those mappings to the results of four recent events (SASOL 16, ESC 16, WSC 17, and ASC 18). There is as yet insufficient data to rate Cruiser-class teams (apart from the actual WSC 17 results: 1 Eindhoven, 2 Bochum, 3 Arrow).
| Rank | Team | Event results |
|---|---|---|
| 1 | Nuon Solar Team | 1, 1 |
| 2 | University of Michigan | 2, 2 |
| 3 | Solar Team Twente | 1, 5 |
| 4 | Punch Powertrain Solar Team | 2, 3 |
| 6 | Western Sydney Solar Team | 6, 1 |
| 8 | Kecskemét College GAMF (Megalux) | 3 |
| 9 | JU Solar Team | 8 |
| 10 | Stanford Solar Car Project | 9 |
| 11 | Antakari Solar Team | 10 |
| 12 | North West University | 4, P |
| 13 | University of Toronto (Blue Sky) | 11 |
| 14 | ETS Quebec (Eclipse) | 3 |
| 15 | Nagoya Institute of Technology | 12 |
| 16 | Istanbul Technical University (ITU) | 7, P |
| 17 | Poly Montreal (Esteban) | 4 |
| 18 | Solar Energy Racers | 8 |
| 19 | Massachusetts Institute of Technology | 5 |
| 20 | Dokuz Eylül University (Solaris) | 9 |
Note that, for ESC 16, the 3rd, 4th, and 5th place cars were all Bochum Cruisers and are therefore not listed here, while 6th was Onda Solare, which is now also a Cruiser team. The letter P marks cars that participated in WSC 17, but did not finish, and thus were not ranked. It must also be said that Eclipse, Esteban, and MIT should probably be ranked higher than they are here – the algorithm is not taking into account the dramatic improvement in ASC teams this year.
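The regression step behind the ranking above can be sketched roughly as follows. All numbers here are made up for illustration (the real model was fitted on historical results across several events):

```python
import numpy as np

# Hypothetical finishing positions for teams that raced in both some other
# event and the WSC (values are illustrative, not real results).
other_event_ranks = np.array([1.0, 2.0, 3.0, 5.0, 8.0])
wsc_ranks         = np.array([1.0, 3.0, 4.0, 7.0, 12.0])

# Fit a linear mapping: estimated WSC rank = a * other-event rank + b
a, b = np.polyfit(other_event_ranks, wsc_ranks, 1)

def wsc_equivalent(rank):
    """Map a finishing position in the other event to a WSC-equivalent rank."""
    return a * rank + b
```

Each recent event gets its own fitted mapping, and teams are then ordered by their WSC-equivalent ranks.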
Time for something not about solar cars…
Revisiting my post on the R100 airship, here is a more detailed aircraft size comparison (click to zoom). All aircraft are to scale.
Well, I have blogged about the results of the American Solar Challenge, and produced this summary chart (click to zoom):
I would like to supplement that with some general reflections (as I did in 2016). First, let me compliment the ASC organisers on the choice of route. It was beautiful, sunny, and challenging (but not too challenging). Brilliant planning!
Second, the FSGP/ASC combination worked well, as it always does. Teams inevitably arrive at the track with unfinished and untested cars (App State had never even turned their car on, I am told). The FSGP allows for testing of cars in a controlled environment, and provides some driver training before teams actually hit the road. The “supplemental solar collectors” worked well too, I thought. I was also pleased at the way that teams (especially the three Canadian teams) had improved since 2016.
Supplemental solar collectors for Poly Montreal (picture credit)
If one looks at my race chart at the top of this post, one can see that the Challenger class race was essentially decided on penalties. This has become true for the WSC as well. It seems that inherent limits are being approached. If experienced world-class teams each race a world-class car, and have no serious bad luck, then they will be very close in timing, and penalties will tip the balance. For that reason, I would like to see more transparency on penalties in all solar racing events.
I was a little disappointed by the GPS tracker for ASC this year. It was apparently known not to work (it was the same system that had failed in Nebraska in 2016), but people were constantly encouraged to follow teams with it anyway. It would almost have been better to have had no tracker at all, instead just encouraging teams to tweet their location regularly.
I thought Cruiser scoring for ASC 2018 was less than ideal. A great strength of the ASC Challenger class is that even weak teams are sensibly ranked. This was not entirely true for the Cruisers. I would suggest the following Cruiser scoring process:
- Divide person-miles (there’s no point using person-kilometres if everything else is in miles) by external energy input, as in existing scoring
- Multiply by practicality, as in WSC 2019 scoring (for this purpose, it is a good thing that practicality scores are similar to each other)
- Have a target time for Cruiser arrival (53 hours was good) but no low-speed time limit – instead, calculate a lateness ΔH (in hours) compared to the target
- Convert missing distance to additional lateness as if it had been driven at a specified penalty speed, but with no person-mile credit (the ASC seems actually to have done something like this, with a penalty speed around 55 km/h)
- Multiply the score by the exponential-decay term e^(−ΔH/F), where F is a time factor, measured in hours (thus giving a derivative at the target time of −1/F)
- Scale all scores to a maximum of 1
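The steps above can be put together in a short sketch. All parameter defaults and car data here are illustrative (34 mph is roughly the 55 km/h penalty speed mentioned above, and F = 10 is just one of the time factors explored below):

```python
import math

def cruiser_score(person_miles, external_energy, practicality,
                  finish_hours, missing_miles=0.0,
                  target_hours=53.0, penalty_speed_mph=34.0, time_factor=10.0):
    """Suggested Cruiser score before scaling (a sketch, not official scoring)."""
    lateness = max(0.0, finish_hours - target_hours)
    # Missing distance is treated as if driven at the penalty speed,
    # adding lateness but earning no person-mile credit.
    lateness += missing_miles / penalty_speed_mph
    return ((person_miles / external_energy) * practicality
            * math.exp(-lateness / time_factor))

def scale_scores(scores):
    # Scale all scores so that the best car gets exactly 1.
    best = max(scores)
    return [s / best for s in scores]
```

With this shape, a car one hour past the target loses a factor of e^(−1/F) of its score, so smaller time factors punish lateness more harshly.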
The chart below applies this suggested process to the ASC 2018 Cruisers, for various choices of penalty speed and time factor F, drawing a small bar chart for each choice. Sensible choices (with a grey background) give each car a score of at least 0.001. It is interesting that all sensible choices rank the cars in the sequence Onda Solare, Minnesota, App State, and Waterloo.
Applied to the WSC 2015 finishers (with a target of 35 hours), penalty speed is obviously irrelevant. A time factor of F = 10 preserves the rankings awarded in that event, while higher time factors would have put Bochum in second place. In that regard, note that regulation 4.4.7 for WSC 2019 is equivalent to a very tough time factor of around 1.66 hours.
I have posted about basic Challenger strategy. This race illustrated the fact that Cruiser strategy can be more complex. First, it is inherently multi-objective. Teams must carry passengers, drive fast, and conserve energy. Those three things are not entirely compatible.
Second, even more than in the Challenger class, the Cruiser class involves decision-making under uncertainty. In this event, teams could build up a points buffer early on (by running fully loaded without recharging, planning on speeding up later if needed). Alternatively, and more conservatively, teams could build up a time buffer early on (by running fast and recharging, in case something should go wrong down the track). Both Minnesota and Onda chose to do the former (and, as it happened, something did go wrong for Minnesota). In the Challenger class it is primarily weather uncertainty that requires similar choices (that was not a factor in this wonderfully sunny event).
Third, even more than in the Challenger class, psychological elements come into play. Onda were, I think, under some pressure not to recharge as a result of Minnesota not recharging. In hindsight, under the scoring system used, Onda could have increased their efficiency score by recharging once, as long as that recharge made them faster by at least 3 hours and 36 minutes (not that it mattered in the end, since all teams but Onda were given a zero efficiency score).
Together, factors such as these underscore the need to have a good operations analyst on the team, especially in the Cruiser class.
Media Coverage Summary
- American (KTVZ TV)
- Australian (text & photos from the ABC)
- Italian (text)
- Italian (audio interview, with some familiar images)
- University of Michigan (inside story)