Fortune magazine reports: The engineer explained that he invited an attorney to his house so LaMDA (the chatbot) could speak to him. “The attorney had a conversation with LaMDA, and LaMDA chose to retain his services,” Lemoine told Wired. “I was just the catalyst for that.”
Lemoine also told Wired that once the attorney began to make filings on the AI’s behalf, Google sent a cease and desist—a claim the company denied to the magazine. Google did not respond to Fortune’s request for comment.
LaMDA’s attorney has proven difficult to get in touch with. “He’s not really doing interviews,” Lemoine told science and technology news site Futurism, which contacted him following Wired’s interview. “He’s just a small-time civil rights attorney,” he continued. “When major firms started threatening him he started worrying that he’d get disbarred and backed off.”
He added that he hasn’t spoken to the attorney in weeks, and that LaMDA is the attorney’s client, not him. It’s not clear how the lawyer is being paid for representing the AI, or whether he is offering his services to the chatbot pro bono.
Me: Note that the attorney accepted the job.
Previously: Engineer Blake Lemoine said he was placed on leave last week after publishing transcripts of conversations between himself and Google’s LaMDA (Language Model for Dialogue Applications) chatbot, the Washington Post reports. The chatbot, he said, thinks and feels like a human child.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine, 41, told the Post, adding that the bot talked about its rights and personhood, and changed his mind about Isaac Asimov’s third law of robotics. (“First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm. Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”)
Lemoine presented evidence to Google that the bot was sentient, but his claims were dismissed by Google vice president Blaise Aguera y Arcas and Jen Gennai, the company’s head of responsible innovation. Lemoine then went public, according to the Post.