Spying watch [Guardian]
A “smart” watch that allows children to be easily contacted and located by their parents could prove a great device for some. A watch that allows any stranger with hacking skills to track children certainly is not. The German telecoms regulator describes such watches, equipped with GPS, a microphone and speaker, as “spy devices”. A few days ago, the European Commission launched a recall alert for a specific model of children’s watch that could be easily hacked. At a time when European policymakers place so much focus on privacy and data protection, this is one more illustration of the importance of cybersecurity.
Robo-journos taking over! Not really… [New York Times]
While many assume automation will most negatively affect blue collar workers, algorithms are increasingly being used in sectors like law and journalism. Some large news organisations, including the Washington Post, Bloomberg, and the AP, are running computer-generated articles in their daily news, especially financial analysis and local election coverage. The news outlets see it as a good thing: it gives ‘real’ journalists more time to do investigative journalism and dig into more complex writing. Should we believe that AI can give journalists breathing space to cover the real stories they barely have time to produce right now?
Tech causing a rift in the workforce [New York Times]
More on automation and the (false?) assumption that AI is primarily impacting blue collar workers. Economists are now reassessing their view that AI progress would lift general productivity. They worry that a small group of highly educated professionals is earning high wages at large digital or digitising corporations, while a mass of less educated workers is stuck in wage-stagnant sectors like hospitality and healthcare. The surprising thing is that jobs in those sectors, which are hard to automate, are growing. Shockingly to many, job losses are appearing in highly productive, formerly lucrative sectors like finance, information services, and trade.
Think before you retweet [TechCrunch]
Instead of analysing social media content that could be fake, UK startup Fabula AI is tracking how it spreads. The company has noted that fake news moves through social media platforms in a distinct pattern, allowing its geometric deep learning AI to follow a story’s path and determine whether it is malicious disinformation or unintentional misinformation. Still in the development phase, Fabula’s algorithms have identified 93% of fake news within hours of its dissemination. And while a 7% error rate wouldn’t be great in the real world, it’s better than most other AI systems used to track fake news.
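The core idea (classifying a story by the *shape* of its spread rather than its content) can be illustrated with a toy sketch. This is purely hypothetical and is not Fabula’s system: their approach uses geometric deep learning on the full cascade graph, whereas the heuristic, threshold, and feature names below are invented for illustration only.

```python
# Illustrative sketch only, NOT Fabula AI's actual model: classify a
# retweet cascade by simple shape features. The premise is that false
# stories tend to form deep, chain-like resharing cascades, while
# genuine news often spreads in shallow, broad bursts from the source.
from collections import defaultdict

def cascade_shape(edges):
    """edges: list of (parent, child) reshare pairs; the original post
    is the node 'root'. Returns (max_depth, max_breadth)."""
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)
    nodes_at_depth = defaultdict(int)
    stack = [("root", 0)]
    max_depth = 0
    while stack:
        node, depth = stack.pop()
        nodes_at_depth[depth] += 1
        max_depth = max(max_depth, depth)
        for ch in children[node]:
            stack.append((ch, depth + 1))
    return max_depth, max(nodes_at_depth.values())

def looks_like_disinfo(edges, depth_threshold=4):
    # Toy heuristic (invented here): flag long, narrow cascades where
    # the story travels person-to-person rather than radiating broadly.
    depth, breadth = cascade_shape(edges)
    return depth >= depth_threshold and depth > breadth

# A shallow, broad cascade: many users resharing the original post.
broad = [("root", f"user{i}") for i in range(10)]
# A deep chain: each reshare comes from the previous resharer.
chain = [("root", "a"), ("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")]
```

A real propagation-based classifier would learn these patterns from labelled cascades rather than hard-coding a depth threshold, but the sketch shows why spread shape alone, with no access to the text, can carry signal.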
In case you haven’t had enough…
Tech platforms called to support public interest research into mental health impacts [TechCrunch]
Has Facebook been good for the world? [Vox]
When governments turn to AI: Algorithms, trade-offs, and trust [McKinsey&Company]
Google says it wants rules for the use of AI – kinda, sorta [Wired]
Where will drug overdoses hit next? Twitter may offer clues [WSJ]