- posted
2 years ago
OT: ATTN: Rod Speed: Alexa et al listen in on you.
- posted
2 years ago
If you go into the app, it can be good fun to see what you said and what it thought you said in the history. Hours of laughs. For example: "Alexa, play Radio 2" comes out as "a lady, play radio tooth", and it replies, "Sorry, I cannot find that station; here is rap radio from Belfast." I invented the names, but that is the gist of how counter-intuitive it can be. Brian
- posted
2 years ago
The main gripe I have with inline ads on the web is that a wonderfully blind-accessible web site can churn out adverts that completely trash all the care and design the site builder has taken, merely by putting scrolling graphics, text as pictures, or some kind of dodgy formatting in the page. I thus use ad blockers, and any site that does not like it and wants me to pay them can go to hell; they should be putting the person first and foremost, whatever their disability, otherwise what is the point if folk are going to click away? It's not a new problem. Remember all those download sites that had pop-ups saying "download me first" etc.? Bring back text and FTP sites, I say. Brian
- posted
2 years ago
Ads annoy me and I'm fully sighted; it must be many times worse for you. I can't even stand to see a single ad on a page, and I can't watch or hear a single TV or radio ad without muting or fast-forwarding. There are just so many of them it's gone beyond a joke.
- posted
2 years ago
When I worked at a university, we did some studies on voice recognition. The main thing I found was that more processing power makes a lot of difference. On one of the systems we had, you could see it thinking, getting a better match as it went along. A slow processor can't keep up with a fast talker. I don't know how good the processor in an Alexa is.
- posted
2 years ago
Surely Alexa's processing is done centrally by the Amazon server.
- posted
2 years ago
That's a rather silly way to do it. They'd need a huge system, and there would be problems with latency. You don't need that much CPU to process voice; it should be done inside the device.
- posted
2 years ago
It is.
The device has beamforming in it with the microphone array to pick up the direction of the loudest sound.
It then uses local wakeword recognition for the "Alexa" keyword.
Only then does it start to stream audio to the server - starting just _before_ the wakeword incidentally.
It's far too expensive to process audio in the servers all the time.
And BTW the mute button is hardware linked to the light. No software hack can make it look deaf but not be deaf.
Andy
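The pipeline Andy describes above (buffer audio locally, detect the wakeword on-device, and only then start streaming, from just *before* the wakeword) can be sketched roughly like this. The chunk count, class name, and toy string-matching detector are my own illustrative inventions, not Amazon's actual implementation:

```python
from collections import deque

PREROLL_CHUNKS = 5  # how much audio to keep from before the wakeword (illustrative)

class WakewordStreamer:
    """Toy model of the local-first pipeline: keep recent audio in a
    ring buffer, run a local wakeword check, and only start "streaming"
    once the wakeword fires, including the pre-roll captured before it."""

    def __init__(self, detect):
        self.detect = detect                     # local wakeword detector (stub)
        self.preroll = deque(maxlen=PREROLL_CHUNKS)
        self.streaming = False
        self.sent = []                           # stands in for the uplink to the server

    def feed(self, chunk):
        if self.streaming:
            self.sent.append(chunk)              # everything after the wakeword goes up
        elif self.detect(chunk):
            self.streaming = True
            self.sent.extend(self.preroll)       # ship the audio captured *before* the wakeword
            self.sent.append(chunk)
        else:
            self.preroll.append(chunk)           # nothing leaves the device yet

# Usage: only chunks from shortly before the wakeword onward are "sent".
s = WakewordStreamer(detect=lambda c: c == "alexa")
for chunk in ["a", "b", "c", "d", "e", "f", "g", "alexa", "turn", "on"]:
    s.feed(chunk)
print(s.sent)  # ['c', 'd', 'e', 'f', 'g', 'alexa', 'turn', 'on']
```

The early chunks "a" and "b" fall out of the ring buffer and are never sent anywhere, which is the point: until the wakeword fires locally, old audio is continuously discarded on the device.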
- posted
2 years ago
This is stupid for two reasons. Firstly, it's a privacy breach. Secondly, why do all that processing in one place? You don't need much power to do voice recognition with today's processors; smartphones do it, FFS.