Freaked Out? We Really Can Prepare for A.I.
OpenAI last week released its most powerful language model yet: GPT-4, which vastly outperforms its predecessor, GPT-3.5, on a variety of tasks.
GPT-4 can pass the bar exam in the 90th percentile, while the previous model struggled, scoring around the 10th percentile. GPT-4 scored in the 88th percentile on the LSAT, up from GPT-3.5's 40th percentile. And on the advanced sommelier theory test, GPT-4 performed better than 77 percent of test-takers; its predecessor hovered around 46 percent. These are stunning results, not just for what the model can do but for the rapid pace of progress. And OpenAI's ChatGPT and other chatbots are just one example of what recent A.I. systems can achieve.
Kelsey Piper is a senior writer at Vox, where she’s been ahead of the curve covering advanced A.I., its world-changing possibilities, and the people creating it. Her work is informed by her deep knowledge of the handful of companies that arguably have the most influence over the future of A.I.
We discuss whether artificial intelligence has coherent "goals," and whether that matters; whether the disasters ahead in A.I. will be small enough to learn from or "truly catastrophic"; the challenge of building "social technology" fast enough to withstand malicious uses of A.I.; whether we should focus on slowing down A.I. progress, and the specific oversight and regulation that could help us do it; why Piper is more optimistic this year that regulators can be "on the ball" with A.I.; how competition between the U.S. and China shapes A.I. policy; and more.