You no longer have to buy a new car or stereo to use Android Auto



During Google’s annual developers conference this past spring, the company teased a version of its Android Auto software that would run entirely on an Android phone, without requiring a new car or an aftermarket dashboard display. Now the company is starting to roll this software out to Android phone owners.

The new Android Auto app, which is an updated version of the existing app, offers a streamlined, easy-to-navigate phone interface for when you’re driving. It also limits the number of notifications you see, and includes options to have your text messages read aloud and to respond to them by voice.

The new app shows just four icons at the bottom of the screen — for Maps, phone calls, audio, and an enlarged home button that maps back to Android Auto while the app is running, rather than to the phone’s main home screen. At the top of the interface there’s a menu tab and a microphone icon. Even without tapping the icons at the bottom, Android Auto will automatically show a series of cards once you’ve launched the app: right now as I write (not from a car), the main screen of Android Auto is showing music controls for Spotify, the weather in my current location, a missed phone call, and directions to a place I recently looked up in Google Maps.

Google says that any of the apps previously supported in Android Auto — and there are more than 500 of them — will work within this interface, too. That means popular apps like WhatsApp, Spotify, Pandora, and Pocket Casts will all work here.




The new Android Auto is free to download, and will work on any Android device running Lollipop (OS version 5.0) or newer. It will be available in 30 countries.

One of the downsides of the Android Auto mobile app is that its "always-on" mode drains your smartphone’s battery life, although Mickey Kataria, director of product management for Maps and Android Auto, insists that using Android Auto would only drain battery life a "tiny bit more" than, say, running Maps in standard mode for an extended period of time. (The app also suggests that you tether your smartphone to a power source during the initial setup process.)

And while you’ll eventually be able to say "OK Google" to trigger voice control within Android Auto, that feature isn’t available at launch. It’s an odd exclusion from an app that is essentially designed to have you tap and swipe less while you’re driving, but Google has indicated that "OK Google" support will roll out in the coming weeks.

YouTube adds support for HDR video



YouTube today announced that its platform now supports high dynamic range, or HDR, video. HDR essentially allows screens with the right hardware specifications to display a more accurate and realistic range of whites and blacks, as well as a wider range of colors. Alongside 4K resolution, HDR technology is the other big selling point of new TVs and one of the key benefits touted by Microsoft and Sony for the latest versions of their respective gaming consoles. However, there’s a critical lack of HDR content out there to watch, and a hodgepodge of different hardware and software requirements that make it difficult to know where and how to access those videos.

To view any of this content right now, you’ll need a piece of external hardware. That includes Google’s new Chromecast Ultra, as well as an HDR Blu-ray player or the Xbox One S. However, YouTube says Samsung’s 2016 4K TVs will support native HDR video playback from the TVs’ built-in app some time in the future, meaning you won’t need a separate box hooked up.

So now YouTube is trying to give both creators and viewers a central place to start dabbling in HDR video, and its best way of doing that is by pushing the world’s largest video platform to adopt the format. YouTube has done the same in the past for 4K content, as well as for 360-degree video and live-streaming. The goal is to keep YouTube at the forefront of streaming and display technology so that it stays competitive against other video options like Netflix and Amazon. YouTube says it’s working with top channels and creators to fill out its HDR catalog in the coming months.

The Klok-01 watch turns heads and dials



It’s hard to make an original wristwatch these days. But hard doesn’t have to mean impossible, as evidenced by the Swiss-made Klokers Klok-01 that I’ve had the pleasure of testing over the past few weeks. This watch, funded on Kickstarter and costing a hefty €399 ($445) today, stands out with its eye-catching mechanism for telling the time. Instead of rotating hands pointing at the time on a circular dial, the Klok-01 inverts the whole process and rotates the dials around a fixed center point.

This is not a smartwatch, and I’m glad for it. No Bluetooth connectivity, no notification mirroring. It’s a device for telling the time, but it’s also a social instrument, as I quickly discovered during a weeklong trip to New York, where everyone enquired about the Klok’s provenance and functionality. As such, it’s rather a good entry point to the whole culture that surrounds watches. Those among us who collect watches and spend large sums of money on them do so first for the satisfaction of owning a precise, carefully crafted timepiece, but also for the sheer glory of it.




So here’s how the Klok-01 works: there are three dials, each of them rotating with the time. Hours are on the outermost circle, minutes are in the middle, and seconds race along closest to the center. One line straight down the middle tells the time by intersecting each of the dials. That line has an integrated magnifying lens, which looks like it protrudes from the watch’s domed, transparent-polymer cover but actually sits underneath it. Is this the most efficient way of telling the time? Of course not. But it sure is an enchanting way to do it.

I think a big part of the Klok-01’s aesthetic appeal stems from the fact that its visual gimmick is also functional. Those rotating dials feel alive with the motion of time, lending the whole watch a throwback feel that evokes the overly complex inventions of humanity’s technological past. Who doesn’t like the charming, fairy tale images of overelaborate gizmos that make funny noises? The Klok watch only makes the steady ticking noise of a quartz movement, but you get the idea. It’s a tiny bit magical.

I don’t often like gadgets that are designed primarily to look great rather than work great. And my initial impression of the Klok-01 was not a hugely positive one. This watch has no backlight or illumination, so seeing it in the dark is a no-go. I was also disappointed that I could never, even after hours of tinkering, line up the three dials so that they corresponded perfectly to one another — the minutes would always be a little out of sync with the hour dial. But those concerns mostly faded away once I started wearing it.



The default leather strap that comes with the Klok-01 is excellent. It’s soft and supple, adapting to the contours of my wrist instead of pressing against them. Klokers offers a variety of other strap colors and materials (they all fit just as nicely), and the company has invented an effortless detachment mechanism. You just press the red button at the Klok’s lower left, and it slips on and off any compatible strap with ease. It can then just sit by itself as a standalone timepiece, and the watch-less straps are also pretty enough to be worn as bracelets by themselves.

The subtle advantage that the Klok-01 has over most other watches is its space efficiency. It has no lugs and it has practically no bezel, so all I have on my wrist while wearing it is a big, pretty display of time in motion, framed by two thin bands of leather. With a 44mm diameter, this is intended primarily to serve as a men’s watch, but I find its lightness, high degree of comfort, and unisex styling make it more versatile than the typical dress watch of this size.


New Zika treatment that protects pregnant mice could help us protect pregnant women



A new treatment that protects the babies of pregnant mice infected with Zika could help us create a similar therapy for humans.

The mosquito-borne Zika virus has “spread explosively” through the Americas, leading to health and travel advisories in many countries. The virus can cause a severe neurological disorder called Guillain-Barré syndrome. It is especially dangerous for pregnant women, whose children could be born with abnormally small heads, a condition called microcephaly. The World Health Organization has declared Zika a global emergency.

There is currently no Zika vaccine; in the US, the FDA is trying to fight the disease using genetically modified mosquitoes. But a study published today in Nature suggests another avenue of research. Scientists led by James Crowe at Vanderbilt University took virus-fighting proteins, called antibodies, from the white blood cells of humans who had been infected by Zika. After testing these antibodies, they decided to focus on one called ZIKV-117 because it seemed the most effective at fighting the virus.

Next, the team put ZIKV-117 in pregnant mice. Some received it before they were infected with Zika, some after. In both cases, the antibody seemed to fight the Zika virus; additionally, the mouse fetuses ended up bigger than they would have without the antibodies, and the placenta was not as damaged.

Pregnancy in mice is very different from pregnancy in humans, so this work is limited. A lot more research is needed, but the treatment still gives us some insight into potential ways to help pregnant women infected with the virus.

Can deep learning help solve lip reading?



Lip reading is a tricky business. Test results vary, but on average people recognize just one in 10 words when watching someone’s lips, and even the accuracy of self-proclaimed experts varies widely — there are certainly no lip-reading savants. Now, though, some researchers claim that AI techniques like deep learning could help solve this problem. After all, AI methods that focus on crunching large amounts of data to find common patterns have helped improve audio speech recognition to near-human levels of accuracy, so why can’t the same be done for lip reading?

Researchers from the University of Oxford’s AI lab have made a promising — if crucially limited — contribution to the field, creating a new lip-reading program using deep learning. Their software, dubbed LipNet, was able to outperform experienced lip readers by a significant degree, achieving 93.4 percent accuracy in certain tests, compared to 52.3 percent accuracy from human lip readers. And even at this early stage, the software is extremely fast, processing silent video into text transcripts in near real time.

However, before we get lost in nightmares of AI-powered surveillance states and HAL reading lips in 2001: A Space Odyssey, the research from Oxford has some serious limitations. For a start, the system was trained and tested on a research dataset known as GRID: a collection of tens of thousands of short videos of 34 volunteers reading nonsense sentences, along with matching captions. Each clip is just three seconds long, and each sentence follows the same pattern: command, color, preposition, letter, digit, adverb. Sentences include, for example, "set blue by A four please" and "place red at C zero again." Even the words within these patterns are limited — there are just four different commands and four colors. This has led some researchers in the field to suggest that the paper's findings have been overblown, especially after one viral tweet linking to the researchers’ video (below) made the sensationalist claim that the work meant there would be "no more secrets."
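Just how constrained that grammar is can be quantified with a short sketch. This is illustrative only: the six-slot pattern and the example sentences come from the article, while the full word lists below follow the published GRID corpus rather than anything stated here.

```python
import random

# GRID's fixed six-slot grammar: command, color, preposition, letter, digit, adverb.
# Word lists follow the published GRID corpus; the article only quotes examples
# like "set blue by A four please".
SLOTS = [
    ["bin", "lay", "place", "set"],                 # commands
    ["blue", "green", "red", "white"],              # colors
    ["at", "by", "in", "with"],                     # prepositions
    list("abcdefghijklmnopqrstuvxyz"),              # 25 letters (GRID omits "w")
    ["zero", "one", "two", "three", "four",
     "five", "six", "seven", "eight", "nine"],      # digits
    ["again", "now", "please", "soon"],             # adverbs
]

def random_sentence(rng=random):
    """Generate one GRID-style nonsense sentence."""
    return " ".join(rng.choice(words) for words in SLOTS)

# The entire sentence space: 4 * 4 * 4 * 25 * 10 * 4 = 64,000 utterances,
# tiny next to the open vocabulary of natural speech.
total = 1
for words in SLOTS:
    total *= len(words)
print(total)              # 64000
print(random_sentence())  # e.g. "place red at c zero again"
```

A model that masters this closed world of 51 words says little about reading lips in the wild, which is the skeptics' core objection.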





This is certainly not the case. Speaking to The Verge, two of the researchers behind the paper, Yannis Assael and Brendan Shillingford, readily admitted they were working with "restricted vocabulary and grammar," but said this was due to limitations in available data. "The dataset is small but it’s a good indication we could perform just as well with a much bigger dataset," says Assael.


REALLY, THIS WON'T HELP WITH SURVEILLANCE AT ALL

Both Assael and Shillingford are also keen to stress that their work has no application in the world of surveillance, simply because lip reading requires you to see the subject’s tongue — meaning that the video has to be straight-on and well-lit to get a good result. It’s "technically impossible or at least very, very difficult" to use any lip-reading software for surveillance, says Assael, adding that frame rate is also a factor, and one where CCTV usually falls short. He says: "And if you do have frontal video of someone taken with a very good camera, then you probably have a directional microphone [pointed at them] as well!" (On the subject of surveillance, Assael notes that although one of the paper's supervisors also works with Google's AI division DeepMind, Google itself had no involvement with LipNet's development.)


Instead, the two researchers think that lip-reading AI could help people with hearing disabilities, especially in noisy environments where it’s difficult for computers to isolate speech. For example, someone wearing a camera built into a pair of glasses could get clear, frontal footage of someone they're talking to at a party, and a descendant of LipNet could then transcribe the conversation in real time, feeding it into their ear. "Anywhere you have speech recognition and a camera, we can improve that," says Assael. He also mentions silent dictation to Siri or Google Assistant as a possible use case. In the future, then, perhaps those of us who don’t like speaking to our computers can just have them read our lips instead.

Why NASA’s Juno mission could last a lot longer than it was supposed to


NASA is preparing for Juno to stick around Jupiter a lot longer than it had originally planned. The probe — which has been orbiting the gas giant since July — is going to stay in its 53-day orbit around the planet for a while, with no definitive plans at the moment to put the vehicle in a shorter orbit. Originally, NASA had hoped to have Juno in a two-week orbit by now, but ongoing engine troubles are delaying that move.

IT’S NOT EXACTLY BAD NEWS

If Juno never goes into its shorter orbit, it’s not exactly bad news. NASA says it won’t diminish the amount of science Juno can do at Jupiter. The biggest difference is that the spacecraft will be swinging by the planet at a much slower rate, so the mission could conceivably last beyond 2019, instead of its previously scheduled end of February 2018.


The trouble for Juno started last month, right before the spacecraft was about to make its second swing by Jupiter. The vehicle doesn’t orbit the planet in a circle but takes a highly elliptical path, in order to avoid as much of the radiation-filled environment around Jupiter as possible. Because of this, Juno gets super close to Jupiter’s cloud tops for just a few hours each orbit. These close passes are known as Perijove passes, and they’re the times when Juno can gather the most data.

The most recent Perijove pass occurred on October 19th, but the mission team didn’t intend to do any science on that one. Instead, the plan was to ignite Juno’s main engine, putting the vehicle in the shorter 14-day orbit. The engine burn can only be done during a Perijove pass, and none of the science instruments can be on when it happens. Leading up to the pass, NASA engineers found that a few engine valves were taking longer to open than they were supposed to. "That is something that is significant because it can affect how the engine operates," Rick Nybakken, Juno project manager at NASA’s Jet Propulsion Laboratory, tells The Verge.






NASA decided to scrap the engine burn for the October 19th pass and take science measurements of Jupiter instead. But then a software glitch put Juno into safe mode. This mode turns off all of the vehicle’s instruments and prompts the probe to turn toward the Sun while it awaits instructions from Earth. NASA was able to resolve the glitch and bring Juno out of safe mode on October 24th, but by then the Perijove pass was over and no science data had been gathered.

Now, NASA is focusing on the engine troubles. The mission team won’t be performing a burn on Juno’s upcoming Perijove pass, scheduled for December 11th, and they’re preparing for the possibility of never igniting the spacecraft’s engine again. "That’s the thing we’re looking at," says Nybakken. "We’re not going to do it if we can’t do it safely. And so we’re looking at different ways we can do the burn. Right now, it’s too early to say which way it’s going to go."

THE MISSION COULD LAST BEYOND 2019

That means Juno may not de-orbit in February 2018 like NASA had expected. Instead, the mission could last beyond 2019, according to Nybakken. That’s because the spacecraft will be exposed to the worst parts of Jupiter’s radiation less frequently. Eventually, all those charged particles around the planet will slowly damage Juno enough that it can’t function anymore. But Juno only receives the bulk of the planet’s radiation during its flybys of Jupiter, so on a 53-day orbit the vehicle gets pelted with the heaviest radiation doses at a much slower rate. "Radiation accumulation is a function of the number of orbits and not a function of time," says Nybakken. "So it’s really hard to gauge impacts of radiation perspective other than to keep in mind, we accumulate on a full orbit basis not on a time basis."
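Nybakken's point — that radiation accrues per orbit, not per day — is easy to see with back-of-the-envelope arithmetic. The pass budget below is a hypothetical illustration (the article doesn't give NASA's actual figure); only the 14-day and 53-day periods come from the story.

```python
# Radiation accumulates per Perijove pass, not per unit of time (per Nybakken).
# PASS_BUDGET is an assumed illustration, not NASA's actual radiation budget.
PASS_BUDGET = 33  # hypothetical number of close passes the shielding can absorb

def mission_length_days(orbit_period_days, passes=PASS_BUDGET):
    """Days until the radiation budget is exhausted at a given orbital period."""
    return passes * orbit_period_days

planned = mission_length_days(14)   # the planned two-week science orbit
current = mission_length_days(53)   # the orbit Juno is actually in

print(planned, "days")              # 462 days, roughly 1.3 years
print(current, "days")              # 1749 days, roughly 4.8 years
print(round(current / planned, 2))  # 3.79x longer for the same total dose
```

Whatever the real budget is, the ratio is the same: stretching each orbit from 14 to 53 days stretches the mission by a factor of 53/14, which is why a mission slated to end in early 2018 could now run past 2019.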

The only real problem? An eclipse in mid-2019 that would put Juno in Jupiter’s shadow for six to 10 hours. During that time, the spacecraft would not get any sunlight on its solar panels and its temperature would drop significantly. Juno has gone through periods of no sunlight before, but never for such an extended period of time. So NASA’s navigators are trying to figure out a way to change Juno’s orbit to avoid the eclipse, in case the probe is still operating by then. "That looks like a significant obstacle that we’d have to overcome," says Nybakken, "but we have very creative navigators."

Soylent blames algal flour for consumer complaints



After halting production of Soylent Powder and Soylent Bars in October due to consumer complaints of nausea, vomiting, and diarrhea, Soylent thinks it has finally found the culprit: algal flour.

Since its inception in 2013, the Los Angeles-based meal-replacement startup has gained a strong following in Silicon Valley circles. However, just as the company was basking in its early success with the launch of its new coffee drink, reports surfaced about consumers who experienced unpleasant side effects, Bloomberg reports. The company has since been conducting tests to determine what could be making its consumers sick.

ALGAL FLOUR WILL BE REMOVED FROM FUTURE POWDERS AND BARS

Soylent now plans to remove algal flour from future powders and bars when it releases its new formulation early next year. However, Soylent’s algal flour supplier, TerraVia Holdings Inc., said Soylent’s products contain several irritants — like soy protein isolate and glycerin — that could potentially cause the discomfort consumers experienced. “Our algal flour has been used in more than 20 million servings of products, and we are aware of very few adverse reactions. In no cases was algal flour identified as the cause,” TerraVia said.

Algae-derived ingredients are also found in Soylent’s premade drink and its coffee-replacement Coffiest, but in the form of algal oil, not algal flour. According to Bloomberg, Soylent hasn’t received any complaints about its premade drink, and although there are reports of consumers getting sick from Coffiest, Soylent has yet to officially acknowledge them and has not paused production.