Date: 2020-12-10

I was thinking about the levels of automation for self-driving cars, and that made me think about a neat way to interpret the “levels” of AI. The interpretation isn’t directly about how AI is used in self-driving cars. Rather, it’s about the relationship between the driver and the car, and how it parallels the relationship between a researcher and software. So instead of a driver and a car, you have a researcher/engineer/whatever and some software that maintains an autonomous system.

At the most basic level, the researcher has to intervene constantly to keep the AI system running. This parallels a driver who has to keep their hands on the wheel while the car runs rudimentary autonomous driving software. At the other end of the spectrum you have true AI, where no intervention from the researcher is required and the AI system is a self-sustaining (and, more importantly, self-correcting) entity. To continue the analogy, this would be the “steering wheel optional” situation.

This is a super unfinished thought, though… If there’s one thing you take away from this post, it’s… to stop people from saying triggering things like “linear regression is AI” 😏