Next generation true AI researchers may get it right – for all humans, not just a few

John Hewitt @johnhewtt  Ruth-Ann’s great work building a Jamaican Patois Natural Language Inference dataset was picked up by Vox as part of its video “Why AI doesn’t speak every language.” Happy to see Ruth-Ann’s work (and disparities in NLP across languages) get this general audience coverage. x.com/ruthstrong_/st…
Ruth-Ann Armstrong  @ruthstrong_
Check out this Vox video I was featured in where I chat about JamPatoisNLI which I worked on with @chrmanning and @johnhewtt! Many thanks to @PhilEdwardsInc for platforming our work https://youtu.be/a2DgdsE86ts
Replying to @johnhewtt and @ruthstrong_



One of the comments on the Vox video is that small languages should write more. I think it would be good to look at statistical compression of sound recordings of speech in those languages. TikTok moved to short videos, but you can go smaller still and use speech, not images. There are FFT-based methods, but many simpler methods as well.
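As one illustration, here is a minimal sketch of the FFT-based idea in Python: keep only the strongest frequency components of each short frame and discard the rest. The sample rate, frame length, and number of retained bins are assumptions chosen for illustration, not tuned values.

import numpy as np

def compress_frame(frame, keep=24):
    # Keep the `keep` strongest FFT bins of one frame of real audio.
    spectrum = np.fft.rfft(frame)
    idx = np.argsort(np.abs(spectrum))[-keep:]
    return idx, spectrum[idx]

def decompress_frame(idx, coeffs, frame_len):
    # Rebuild an approximate frame from the retained bins only.
    spectrum = np.zeros(frame_len // 2 + 1, dtype=complex)
    spectrum[idx] = coeffs
    return np.fft.irfft(spectrum, n=frame_len)

rate = 16000       # assumed sample rate, Hz
frame_len = 320    # 20 ms frames at 16 kHz
t = np.arange(frame_len) / rate
# A toy "voiced" frame: a fundamental plus one harmonic.
frame = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)

idx, coeffs = compress_frame(frame)
approx = decompress_frame(idx, coeffs, frame_len)
rms = np.sqrt(np.mean((frame - approx) ** 2))
print(f"kept {len(idx)} of {frame_len // 2 + 1} bins, rms error {rms:.4f}")

Keeping 24 of 161 bins stores roughly one seventh of the coefficients; real speech codecs do far better, but the principle is the same.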

Korea’s Sejong created Han’gul so all speakers could learn to write down their sounds. It aimed to break the monopoly of the classical Chinese scribes who controlled all things written. A single encoding scheme, from speech to sound codes, could cover the old alphabets while using one scheme for all languages. When all humans can transcribe the sounds of any language, even when they do not know the meaning, they can write them down and share them. I think the SEMI industry and related groups would help make devices, and apps on cell phones and computers can do much of it: converting “universal speech sounds” to codes, codes for AI tokenization, so there is sound-based data for all mostly-spoken languages.
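To make that concrete, here is a toy sketch of one such scheme, assuming every phone in the International Phonetic Alphabet gets a fixed integer code, so the same table can transcribe any spoken language. The tiny table is hypothetical; a real registry would cover the full IPA.

# A minimal sketch of one universal sound-code scheme: every phone
# gets a fixed integer code, so the same table transcribes any
# spoken language. This tiny table is hypothetical and illustrative.
IPA_CODES = {
    "p": 1, "b": 2, "t": 3, "d": 4, "k": 5, "g": 6,
    "m": 7, "n": 8, "s": 9, "z": 10, "l": 11, "r": 12, "w": 13,
    "a": 20, "e": 21, "i": 22, "o": 23, "u": 24,
}

def encode(phones):
    # Map a sequence of IPA phones to integer sound codes.
    return [IPA_CODES[p] for p in phones]

def decode(codes):
    # Invert the mapping to recover the phone sequence.
    rev = {v: k for k, v in IPA_CODES.items()}
    return [rev[c] for c in codes]

# The same table handles words from unrelated languages:
print(encode(["p", "a", "t", "w", "a"]))    # a Patois-like word
print(decode(encode(["s", "e", "m", "i"])))

The same integer stream can feed an AI tokenizer directly, which is the point: one code table for every spoken language.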
 
If all the knowledge about covid experiences had been accepted in lossless form during the chaos (rather than forcing “writers” to do all the recording, biased as it was), then perhaps a global picture of what was happening would have emerged much faster. Voices and stories sent directly to a global open database that everyone could know was unbiased, because it was built that way.
 
Sound recording and encoding of speech can be compact because, for most people, speech abilities are never trained beyond a modest repertoire of sounds. Encoded speech, highly compressed but keeping the meaning of the tokens and words, is exactly what written words do. How the codes are written down matters less than that there are unique codes for transmitting the words of each language.
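A back-of-the-envelope calculation shows the gap between raw audio and phone-level codes; every rate below is a rough assumption for illustration, not a measurement.

# Rough comparison of raw audio versus phone-level sound codes.
# All rates are assumed round numbers, not measurements.
raw_rate_hz = 16_000                  # assumed PCM sample rate
raw_bits_per_sample = 16              # assumed sample depth
raw_bps = raw_rate_hz * raw_bits_per_sample      # 256,000 bit/s

phones_per_second = 12                # assumed typical speaking rate
bits_per_phone = 7                    # enough for ~128 distinct codes
code_bps = phones_per_second * bits_per_phone    # 84 bit/s

print(f"raw audio  : {raw_bps:,} bit/s")
print(f"sound codes: {code_bps:,} bit/s")
print(f"reduction  : about {raw_bps // code_bps:,}x")

At these assumed rates the codes are roughly three thousand times smaller than the raw recording, which is why unique codes matter more than how they are written down.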
 
I think that human-sound-to-AI communication could have much higher bandwidth, perhaps adding electro-, magneto-, and audio-physiological data as well.
 
Good luck with your projects.
 
Filed as “Next generation true AI researchers may get it right – for all humans, not just a few”
 
Richard Collins, The Internet Foundation
 
