Well, I have blogged about the results of the American Solar Challenge, and produced this summary chart (click to zoom):
I would like to supplement that with some general reflections (as I did in 2016). First, let me compliment the ASC organisers on the choice of route. It was beautiful, sunny, and challenging (but not too challenging). Brilliant planning!

The beautiful ASC route (picture credits: 1, 2, 3, 4, 5, 6, 7, 8)
Second, the FSGP/ASC combination worked well, as it always does. Teams inevitably arrive at the track with unfinished and untested cars (App State had never even turned their car on, I am told). The FSGP allows for testing of cars in a controlled environment, and provides some driver training before teams actually hit the road. The “supplemental solar collectors” worked well too, I thought. I was also pleased at the way that teams (especially the three Canadian teams) had improved since 2016.
If one looks at my race chart at the top of this post, one can see that the Challenger class race was essentially decided on penalties. This has become true for the WSC as well. It seems that inherent limits are being approached. If experienced world-class teams each race a world-class car, and have no serious bad luck, then they will be very close in timing, and penalties will tip the balance. For that reason, I would like to see more transparency on penalties in all solar racing events.
I was a little disappointed by the GPS tracker for ASC this year. It was apparently known not to work (it was the same system that had failed in Nebraska in 2016), but people were constantly encouraged to follow teams with it anyway. It would almost have been better to have had no tracker at all, instead just encouraging teams to tweet their location regularly.
Cruiser Scoring
I thought Cruiser scoring for ASC 2018 was less than ideal. A great strength of the ASC Challenger class is that even weak teams are sensibly ranked. This was not entirely true for the Cruisers. I would suggest the following Cruiser scoring process (sketched in code after the list):
- Divide person-miles (there’s no point using person-kilometres if everything else is in miles) by external energy input, as in existing scoring
- Multiply by practicality, as in WSC 2019 scoring (for this purpose, it is a good thing that practicality scores are similar to each other)
- Have a target time for Cruiser arrival (53 hours was good) but no low-speed time limit – instead, calculate a lateness ΔH (in hours) compared to the target
- Convert missing distance to additional lateness as if it had been driven at a specified penalty speed, but with no person-mile credit (the ASC seems actually to have done something like this, with a penalty speed around 55 km/h)
- Multiply the score by the exponential-decay term e^(−ΔH/F), where F is a time factor, measured in hours (thus giving a derivative at the target time of −1/F)
- Scale all scores to a maximum of 1
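To make the arithmetic concrete, here is a minimal sketch of this suggested process in Python. The function names and all the team data in the usage example are mine and purely hypothetical (nothing here comes from the ASC regulations); the penalty speed of 35 mph simply approximates the 55 km/h mentioned above.

```python
import math

def cruiser_score(person_miles, external_energy_kwh, practicality,
                  hours_late, miles_missed, penalty_speed_mph, time_factor_f):
    """Unscaled score for one Cruiser under the suggested process."""
    # Missing distance becomes extra lateness, as if it had been driven
    # at the penalty speed (and it earns no person-mile credit).
    delta_h = hours_late + miles_missed / penalty_speed_mph
    # Person-miles per unit of external energy, times practicality,
    # discounted by the exponential-decay lateness term.
    return ((person_miles / external_energy_kwh) * practicality
            * math.exp(-delta_h / time_factor_f))

def scale_to_unit_max(scores):
    """Scale all scores so that the best car gets exactly 1."""
    top = max(scores.values())
    return {team: s / top for team, s in scores.items()}

# Hypothetical data: (person-miles, kWh, practicality, hours late, miles missed)
teams = {
    "Car A": (3000, 50, 0.80, 0.0, 0),
    "Car B": (2400, 40, 0.75, 4.5, 120),
}
raw = {team: cruiser_score(*data, penalty_speed_mph=35, time_factor_f=10)
       for team, data in teams.items()}
print(scale_to_unit_max(raw))
```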
The chart below applies this suggested process to the ASC 2018 Cruisers, for various choices of penalty speed and time factor F, drawing a small bar chart for each choice. Sensible choices (with a grey background) give each car a score of at least 0.001. It is interesting that all sensible choices rank the cars in the sequence Onda Solare, Minnesota, App State, and Waterloo.
Applied to the WSC 2015 finishers (with a target of 35 hours), penalty speed is obviously irrelevant, since every finisher covered the full distance. A time factor of F = 10 preserves the rankings awarded in that event, while higher time factors would have put Bochum in second place. In that regard, note that regulation 4.4.7 for WSC 2019 is equivalent to a very tough time factor of around 1.66 hours.
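As a quick check on what these time factors mean (using the same decay term as in the sketch above), one hour of lateness costs about a tenth of the score at F = 10, but nearly half of it at F = 1.66:

```python
import math

# How much of the score survives one hour of lateness, for each time factor.
for f in (10.0, 1.66):
    print(f"F = {f:5.2f} h: one hour late keeps {math.exp(-1 / f):.0%} of the score")
# F = 10.00 h: one hour late keeps 90% of the score
# F =  1.66 h: one hour late keeps 55% of the score
```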
Of course, another option would be to return to the additive scoring systems of WSC 2013 and WSC 2015, and this has been suggested.
Strategy
I have posted about basic Challenger strategy. This race illustrated the fact that Cruiser strategy can be more complex. First, it is inherently multi-objective. Teams must carry passengers, drive fast, and conserve energy. Those three things are not entirely compatible.
Second, even more than in the Challenger class, the Cruiser class involves decision-making under uncertainty. In this event, teams could build up a points buffer early on (by running fully loaded without recharging, planning on speeding up later if needed). Alternatively, and more conservatively, teams could build up a time buffer early on (by running fast and recharging, in case something should go wrong down the track). Both Minnesota and Onda chose to do the former (and, as it happened, something did go wrong for Minnesota). In the Challenger class it is primarily weather uncertainty that requires similar choices (that was not a factor in this wonderfully sunny event).
Third, even more than in the Challenger class, psychological elements come into play. Onda were, I think, under some pressure not to recharge as a result of Minnesota not recharging. In hindsight, under the scoring system used, Onda could have increased their efficiency score by recharging once, as long as that recharge made them faster by at least 3 hours and 36 minutes (not that it mattered in the end, since all teams but Onda were given a zero efficiency score).
Together, factors such as these underscore the need to have a good operations analyst on the team, especially in the Cruiser class.