Machine Learning Is Indistinguishable from Magic

… and that’s a big design problem

[Note: this was something I wrote in 2016 and I'm throwing it on LinkedIn to see what the writing experience is like]

Machine Learning and Artificial Intelligence have gotten a lot of press over the last few decades, and for good reason. In 1997, we watched a computer, IBM's Deep Blue, beat the world chess champion. It was a landmark moment, because it foreshadowed a day when machines could perhaps outthink humans.

But I’ve been seeing a worrying trend in tech circles over the last few years. Machine Learning, or “ML,” has turned into a shorthand for “magic.” For example, you might say “We could use ML to make sure your phone automatically gets set to silent when you walk into a movie theatre.” Or “We could use ML to make sure you never see abusive comments on the internet.” Sounds great, like everything that comes from a magic wand.

But there’s a very real, very tangible, very tactical side to all this magic. And you have to design for it, or you’re going to make a really bad experience. Others have written about this before, but here’s my back-of-the-envelope summary of what I’ve learned over the years about how to design with ML in mind:

  1. Don’t draw attention if you don’t need to. Just do it.
  2. Obvious always wins.
  3. Explain the implications of what you are doing.
  4. Provide opportunities to “teach” the system.

1. Don’t Draw Attention

When Google autocompletes your searches, it doesn’t say “brought to you by Machine Learning.” It just autocompletes them.

When your car notices that you’re skidding and hands control to a specially designed anti-skid algorithm, it doesn’t say “you are no longer controlling the brakes, pesky human.” It lets you believe you are still braking while it silently gets the job done.

When your phone knows you always check your email at 7:02 in the morning, then tap Twitter, then tap Facebook, it can pre-load all those requests a minute early (as long as you’re on Wi-Fi and plugged in). It doesn’t need to provide a setting for it, and it doesn’t need to explain that it’s doing it.
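The habit-based prefetch idea above can be sketched in a few lines. This is purely illustrative: the app names, the 7:02 habit, and the guard conditions are assumptions pulled from the example, not a real phone API. The point is that the invisibility lives in the conditions, not the prediction.

```python
from datetime import datetime, timedelta

# Hypothetical learned habit: the user's morning app routine and its usual time.
HABITUAL_APPS = ["email", "twitter", "facebook"]
HABIT_TIME = datetime.strptime("07:02", "%H:%M").time()

def should_prefetch(now, on_wifi, charging, lead_minutes=1):
    """Prefetch only when it's invisible to the user: shortly before the
    habitual time, on Wi-Fi, and on external power (so no data or battery
    cost the user would ever notice)."""
    lead = (datetime.combine(now.date(), HABIT_TIME)
            - timedelta(minutes=lead_minutes)).time()
    return on_wifi and charging and lead <= now.time() < HABIT_TIME

def prefetch_plan(now, on_wifi, charging):
    # No setting, no notification: either quietly warm the caches or do nothing.
    return HABITUAL_APPS if should_prefetch(now, on_wifi, charging) else []
```

Note the asymmetry: if any condition fails, the system silently does nothing, because the cost of a wrong silent guess here is zero. That is what makes “don’t draw attention” safe in this case.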

2. Obvious Always Wins

People are finely tuned to notice when they’re losing something, because they really don’t like that feeling. It’s called loss aversion, and it comes up a lot in product design. It becomes a big issue when ML gets too aggressive with the decisions it’s making.

Have you ever missed an important message on Facebook because it buried something you cared about? It’s annoying, right? It makes you lose trust in the algorithm. Even if the algorithm is right 99% of the time, our brains are designed to feel anxiety about that 1% that the robot is getting wrong.

So do I think Facebook should switch to a reverse-chronological stream so people never miss anything? Well, no. I think they made the right call this time. But when it comes to ML, a little goes a long way. Use too much, make too many assumptions, and you’ll be wrong a lot. And even a 1% failure rate can cause people to lose trust.

Dial down the wow factor and dial up the reliability. People should be saying “Of course it works that way,” not “Wow.” Otherwise the only people using your products will be enthusiasts, not mainstream customers, and the product will probably fail.

3. Explain the Implications

Email spam filtering is a fitting example. In Gmail and other products you can say “Never show me spam like this again,” and the product will show you all the similar messages that would be affected by your new filter. This way you can see exactly what assumptions it will be making on your behalf, which increases both the quality of the filter and your trust in it.

All Machine Learning that requires messaging falls into this category. If you’ve decided you need to tell the user about some gee-whiz ML feature, then you’ve signed up for clearly explaining the implications of the user’s actions. And you also need to…

4. Provide Opportunities to Teach the System

Spam filtering, again, provides a great precedent. Sometimes the robot gets a bit over-zealous with its filter and suddenly your birthday greeting from Aunt Gretta is banished to the spam folder. But when you find it, no problem: you can tell the system “never mark this sender as spam.”

This is a great pattern for a few reasons. You are able to see the implications of the incorrect decision, you trust that the system is never going to delete your messages entirely, and you are able to teach the code to do better next time. This makes the system “smarter” while increasing your trust. Win-win.
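The feedback loop described above can be sketched as a user-taught override on top of a learned filter. Everything here is hypothetical, a toy word-list classifier standing in for a real spam model, but it shows the essential design choice: the user’s explicit teaching always beats the algorithm’s guess, and nothing is ever deleted outright.

```python
class SpamFilter:
    """Toy stand-in for a learned spam classifier, plus a user-taught
    override list. Illustrative only; not any real mail product's API."""

    def __init__(self, spammy_words=("prize", "winner", "free money")):
        self.spammy_words = set(spammy_words)
        self.trusted_senders = set()  # populated by user feedback

    def is_spam(self, sender, body):
        # The user's teaching always wins over the classifier's guess.
        if sender in self.trusted_senders:
            return False
        return any(word in body.lower() for word in self.spammy_words)

    def never_mark_as_spam(self, sender):
        """The 'never mark this sender as spam' button."""
        self.trusted_senders.add(sender)
```

Used like this, Aunt Gretta’s mail gets flagged once, the user corrects it once, and the mistake never repeats: the system gets “smarter” in exactly the visible, undoable way the pattern calls for.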



If you’re working on products as part of a team, and someone refers to Machine Learning, here’s what you do. First, you should appreciate that they’re thinking boldly. That’s a good start. But then you should get to a whiteboard with this person as soon as possible to hash out the specifics. The seeds of success or failure are sown in those first discussions. What is the problem we’re trying to solve? Where do we want to take the user?

Do you ever wonder why Steve Jobs was so unsuccessful early in his career and so successful later on? It comes down to this lesson. The moment he learned and internalized it, Apple flourished. Here’s a quote from 1997, shortly before he returned as CEO.

“One of the things I’ve always found is that — you’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try to sell it. And I’ve made this mistake probably more than anybody else in this room. And I’ve got the scar tissue to prove it. And I know that it’s the case.
And as we have tried to come up with a strategy, and a vision for Apple, um, it started with … what incredible benefits can we give to the customer? Where can we take the customer? Not starting with — let’s sit down with the engineers and figure out what awesome technology we have, and then how are we going to market that. And I think that’s the right path to take.”

And that’s where we are with ML. As powerful as it is, it can’t change the reality that the human mind hates black boxes, hates it when the robot makes the wrong decision, hates it when the machine won’t allow the user to undo incorrect assumptions, and hates over-hyped and underwhelming products. In fact, ML’s incredible power is its ability to guess and leave the user out of the loop, which is why it could make even worse software experiences than we’ve seen up until now. A sobering thought.

So it’s our job to keep an eye on that, even as we appreciate what magic ML can perform in the right hands. If we’re careful, listen well, and work hard, those hands could be ours.
