AT&T Was Working on a Siri Precursor in the 1960s


Speech synthesis software has been on the market for quite a few years now, with Siri being just the latest and flashiest example. But have you ever wondered how old the concept of computers talking to us really is?

I’m not just talking about SF fakery like Star Trek or 2001; I’m talking about real research into getting computers to talk to us. While there were early mechanical attempts dating back to the 1700s, modern development of speech synthesis tech started in the 1930s at AT&T’s Bell Labs. AT&T was a telephone monopoly at the time, and it invested a lot of money in basic research (the first transistor was developed there, as were a couple of early computers). And since AT&T was a telephone company, it was also deeply interested in understanding how to recreate speech.

Here’s more from YouTube:

Speech synthesis at Bell Labs dates back to the 1930s and Homer Dudley’s Voder, which was exhibited and publicly demonstrated at the 1939 World’s Fair. Because understanding all aspects of the conversion of speech to electrical signal was a core interest of the Bell System, speech synthesis research continued at the company in the ensuing decades, entering the computer era in the 1960s, with articulatory speech vocal tract models created by Paul Mermelstein, Cecil Coker, John L. Kelly Jr., and Louis Gerstman, among others. Text-to-speech programs were researched from the 1960s all the way to the present day.

The video above dates to 1967, and it’s the work of Dr. Cecil H. Coker and others. While it might not sound like much, the demo is actually far more ambitious than Siri.

Siri essentially plays back recorded speech, stitched together from a human voice; the video above shows scientists modeling the mechanics of how the mouth and throat actually produce sounds. That is a far more complicated and difficult problem.
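
To make the distinction concrete, here’s a minimal sketch of the source-filter idea that underlies vocal tract modeling, written in Python with NumPy and SciPy. To be clear, this is not Coker’s actual articulatory model (which simulated the moving articulators themselves); it’s just an illustrative toy: a buzzing glottal source is shaped by a few resonances (formants) to produce a vowel. The pitch, vowel choice, and formant values below are illustrative assumptions.

```python
# A toy source-filter vowel synthesizer: every sample is computed,
# nothing is a recording. (Illustrative sketch, not Coker's model.)
import numpy as np
from scipy.signal import lfilter

fs = 16000   # sample rate (Hz)
f0 = 120     # glottal pitch (Hz)
dur = 0.5    # duration (seconds)

# Glottal source: a simple impulse train at the pitch frequency,
# standing in for the buzz produced by the vocal folds.
n = int(fs * dur)
source = np.zeros(n)
source[::fs // f0] = 1.0

# Rough formant frequencies and bandwidths (Hz) for the vowel /a/ ("ah").
formants = [(700, 130), (1220, 70), (2600, 160)]

# The "vocal tract": a cascade of two-pole resonators, one per formant.
signal = source
for freq, bw in formants:
    r = np.exp(-np.pi * bw / fs)          # pole radius from bandwidth
    theta = 2 * np.pi * freq / fs         # pole angle from frequency
    a = [1.0, -2 * r * np.cos(theta), r * r]
    signal = lfilter([1.0], a, signal)

signal /= np.abs(signal).max()            # normalize to [-1, 1]
```

The point of the sketch is that the output is generated from a (crude) model of how the voice works, not played back from a recording, and that is the harder road the Bell Labs researchers were walking in 1967.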