What Happens when the Algorithms Go Wrong?
This past weekend it was bright and sunny in Ocean City, MD. On a beautiful Labor Day weekend, the town was largely empty compared to the record crowds that normally flock to the beach on the last weekend of summer. Instead, as cars streamed off the Eastern Shore, Asplundh drivers and Delmarva Power trucks congregated at shopping centers just outside the city for the hurricane that didn't happen.
We've been using Predictive Analytics for thousands of years. Aristotle wrote a treatise around 340 BC that postulated about rain, clouds, thunder, lightning and hail – much of which was absolutely wrong. Today, we try to predict much more than the weather. We leverage access to real-time and historical data through advanced computing and complex algorithms. We try to determine what people will buy, where they'll go and who they'll do it with across a broad spectrum of space and time.
Yet, these predictions have costs. The hotels posted vacancy signs, the restaurants and bars were half-filled and the roads flowed freely with little traffic. The "Sunniest Hurricane" left the normally bustling town with disappointed vendors, empty tax coffers and power line workers on time-and-a-half looking for an emergency that never came.
It's not that the storm didn't happen… it did. The seas were churning, the breeze was blowing and Hurricane Hermine filled the radar with rain – just not where the forecasters predicted. The storm stayed out to sea, tantalizingly close enough to impact the waters around an otherwise dry Ocean City. Having wreaked havoc through Florida and the Carolinas, it chose to ignore the more populous cities of the Northeast.
As we continue to make more and more predictions, our accuracy gets better and better. Lives are saved as hurricane, tornado and storm predictions improve. Yet we have become mesmerized with figuring out the future when we sometimes don't even know what really happened in the past. A missed storm prediction is a reminder that sometimes we get the algorithms wrong, and those mistakes have costs.
Data scientists are flocking to new areas of study, trying to tell us what's going to happen before it does. We're using these predictions to create intervention strategies that try to alter the future. In Higher Education, schools and vendors are partnering to look at the vast sources of raw data and transform the mess into meaningful insight on who will succeed – and who won't – in their quest for a degree. However, we need to be cautious with the predictions we make:
- In 1895, a Munich schoolmaster wrote in Albert Einstein's school report: "He will never amount to anything."
- Of Charlotte Bronte, a school report said that she "writes indifferently" and "knows nothing of grammar."
- Of John Lennon, an instructor wrote: "Hopeless. Rather a clown in class. He is just wasting other pupils' time. Certainly on the road to failure."
Today, we have more expertise and more science wrapped up in our predictions than ever before. However, like this weekend's hurricane forecast showed, more and more data does not guarantee that we know the future. The September/October EDUCAUSE Review, leading into October's Annual EDUCAUSE Conference, features articles on Predictive Analytics, Big Data Analysis in Higher Ed, and the maturing analytics capabilities at colleges and universities.
The Predictive Analytics article by Kevin Desouza and Kendra Smith concludes with this warning: "The unintended consequences of automating, depersonalizing, and behavioral exploration are real."
As we learn more and more about the future, we need to make sure we continue to understand the past, and keep in mind that after more than two thousand years of predicting the weather, all that data and all that information still doesn't always tell us where the rain will fall.
Dan Venedam has worked on the vendor side of Higher Education for most of the last 20 years with the last dozen focused on Analytics. His opinions are luckily only his own.
Right before I read your piece, Dan (which I did enjoy, by the way), I read an article about a Microsoft big data project that uses IoT sensors to manage the contents of people's refrigerators. Basically, a system to tell you that you're almost out of milk and your yogurt and peaches are about to go bad.
While it's true that predictive algorithms need to get better and better, there is an equally important responsibility that needs to be taken on by the forecasting media, i.e. the for-profit companies like Weather.com. They make money from more traffic to their websites, so it's in their best interest to portray the doomsday forecast as the most likely scenario. People don't go to for-profit forecasting websites to see sunny & 70. We as a nation need to be more diligent about ensuring that we present all options with equal emphasis. Granted, the likelihood of one prediction may only be 25-30%; even so, it warrants equal discussion and presentation to the public. I did hear a report by a local Baltimore meteorologist who said, "Yes, it could go either way. It'll either be beautiful or stormy, we just don't know, so I would not change your plans." The DJ immediately discredited the report and tried to emphasize the doomsday version.
Great point Dan!