Until now.
I’ve just watched a ‘60 Minutes’ television news program in which CBS reporter Scott Pelley took a look at Google’s new piece of artificial intelligence named Bard. Pelley typed a request asking Bard to produce a story based on six words. The six words were “For sale. Baby shoes. Never worn.” Bard responded with this story:
“The shoes were a gift from my wife, but we never had a baby. They were sitting in the closet, collecting dust, and I knew they would never be used. So I decided to sell them.
“I’m glad I was able to help her and I’m glad to find a good home for the shoes.”
A shocked Pelley said:
“It created a deeply human tale with characters it invented. I am rarely speechless. I don’t know what to make of this.”
As an aside, the six words Pelley gave to Bard have a history. Back in the 1920s, author Ernest Hemingway is said to have bet some other writers $10 that he could write a complete story in six words. So Hemingway wrote: For sale. Baby shoes. Never worn. And won the bet.
The fact that artificial intelligence could produce a real story from those six words is amazing, and alarming.
Some people are concerned that AI, while increasing productivity and efficiency, will eliminate thousands of human jobs. There also is concern that AI-produced fake news will create chaos in many fields, from law enforcement to politics.
Geoffrey Hinton, a retired Google executive who has been called the Godfather of Artificial Intelligence, worries that AI has the potential to one day take over from humanity.
“I think my main message is there’s enormous uncertainty about what’s going to happen next,” he said in an interview with Pelley. “These things (AIs) do understand. And because they understand, we need to think hard about what’s going to happen next. And we just don’t know.”
It’s certainly important that further development of AI not be left solely to huge tech companies like Google. Many different segments of society need to be involved to ensure the benefits of AI are promoted safely while potential harm is controlled by regulations and by laws that punish abusers.
Said Google CEO Sundar Pichai in an interview last spring:
“This is why I think the development of this needs to include not just engineers, but social scientists, ethicists, philosophers and so on. . . . I think these are all things society needs to figure out as we move along. It’s not for a company to decide.”
Certainly AI is scary because even the experts don’t know its full capabilities, or where it is going next. Hinton expects that within five years AI models like ChatGPT may be able to reason better than humans.
So if in the next while you notice this column reads a bit differently – perhaps lacking its usual human flair and spark – you’ll know that I have been replaced by a computer.