WeeklyUpdates/EmergingTechnology

35 bytes added, 16:07, 18 March 2019
Added link for this week's meeting
* '''Iodide''' -- In the last 10 years, there has been an explosion of interest in “scientific computing” and “data science”, triggering a renaissance in programming languages, tools, and techniques that help scientists and researchers explore and understand data and scientific concepts, and communicate their findings. To date, though, very few tools have focused on giving scientists unfiltered access to the full communication potential of modern web browsers. That’s changing thanks to Iodide, an experimental tool meant to help scientists write beautiful interactive documents that can contain graphics, 3D plots, VR, and other interactive data displays, all using standard web technologies and all within an iterative workflow that will be familiar to many scientists (see the sketch below this list for a feel of the format). You really have to see what Iodide can do to fully appreciate it, which is why Brendan Colloran’s [https://hacks.mozilla.org/2019/03/iodide-an-experimental-tool-for-scientific-communication-exploration-on-the-web/ Hacks blog post] is a must-read.
* '''Faster Behind The Scenes''' -- In order for you to enjoy watching high-quality, royalty-free video on the web, someone, somewhere has to encode it. AV1 is still very new, and work on the initial release focused on making playback fast. Over the last six months, though, some great work has gone into making encoding much faster -- about twelve times faster, according to a recent performance test [https://www.streamingmedia.com/Articles/ReadArticle.aspx?ArticleID=130284 published by StreamingMedia.com]. That gets AV1 into the range where it’s worthwhile for streaming sites with large audiences, like many of the companies behind the Alliance for Open Media that produced AV1, so expect to see a lot more online content using it in the near term. Meanwhile, teams continue to work on making the encoder faster still, as well as on getting playback support into ever more software and even hardware.
* '''How Does It Sound?''' -- In experimenting with new ways of generating realistic, human-sounding speech, our DeepSpeech team has implemented a specialized recurrent neural network technique called “[https://arxiv.org/abs/1802.08435 WaveRNN]” and produced a sample you can listen to and judge for yourself. Check it out at [https://soundcloud.com/kelly-jay-davis/wavernn-tts-integration-results https://soundcloud.com/kelly-jay-davis/wavernn-tts-integration-results], and keep in mind that the audio you’re listening to was produced by our text-to-speech engine based entirely on a string of input text.
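
To give a feel for the Iodide workflow mentioned above, here is a minimal, hypothetical sketch of an Iodide notebook source (the “iomd” format), assuming the <code>%% md</code> and <code>%% js</code> chunk delimiters described in Iodide’s documentation; the examples in the Hacks post are far richer:

<pre>
%% md
# Damped oscillation
A quick look at y(t) = e^(-t/5) * sin(t), written in ordinary Markdown.

%% js
// Plain browser JavaScript; the result of evaluating a chunk is shown
// in the output pane, so you can iterate on code and prose together.
const ys = [];
for (let t = 0; t < 30; t += 0.1) {
  ys.push(Math.exp(-t / 5) * Math.sin(t));
}
ys.slice(0, 5) // peek at the first few values
</pre>

Everything here runs directly in the browser, which is what lets an Iodide document mix narrative, code, and interactive output in a single shareable report.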
== March 11th, 2019 ==