New Lenses Can Give You Super Color Vision

Thanks to the architecture in our eyes, we see but a small subset of the hues that make up the visible spectrum.

We only have three kinds of cones, or color-sensitive cells, to make sense of what could be millions or even hundreds of millions of colors. We still do a pretty good job of it — normal human eyes can pick out about a million different colors, far more than we have ever come up with names for. Still, we could conceivably do better.

More cones would detect more combinations of colors and ever more subtle distinctions between shades. Some people, called tetrachromats, actually possess an extra cone and can see colors invisible to the rest of us. Now, for those of us not blessed with such a mutation, researchers at the University of Wisconsin-Madison have devised a pair of lenses that splits the color spectrum to turn us into artificial tetrachromats.

Splitting One Cone Into Two

Mikhail Kats, a professor in the department of electrical and computer engineering, and his graduate student, Brad Gundlach, focused on a specific type of cone in our eyes that is responsible for seeing blues, or the high-frequency end of the visible spectrum.

It works like this: in a sense, we have six sets of cones, not three, because each eye contains its own. Normally, the cones in both eyes pick up the same wavelengths of light. By selectively blocking a different part of the spectrum in each eye, the lenses make cones that normally work together send separate packets of information to our brains.

Kats says this effectively simulates an additional cone by giving each of our blue photoreceptors a different half of the spectrum. Each eye then sends a different signal to the brain when confronted with the color blue, and when the brain combines that information, new colors emerge. The technique could theoretically give us as many as six kinds of cones using various combinations of lenses. They published their work on the preprint server arXiv.
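The mechanism above can be illustrated with a toy numerical sketch (this is not the authors' optical model; the band edges, filters and spectra below are made up purely for illustration). A cone's response is modeled as the total spectral power falling inside its sensitivity band, so two different spectra can produce identical responses — a metamer — until per-eye filters passing different halves of the band break the tie:

```python
# Toy model: a cone's response is the overlap of the incoming
# spectrum with its sensitivity band. Spectra are lists of
# (wavelength_nm, power) pairs; all numbers are illustrative.

def cone_response(spectrum, band):
    """Sum spectral power over the wavelengths the cone absorbs."""
    lo, hi = band
    return sum(p for wl, p in spectrum if lo <= wl < hi)

BLUE_BAND = (420, 480)      # assumed short-wave cone band, nm
LEFT_FILTER = (420, 450)    # hypothetical left-eye lens: lower half
RIGHT_FILTER = (450, 480)   # hypothetical right-eye lens: upper half

# Two spectra with equal total power in the blue band: a metamer
# to an unfiltered cone, which cannot tell them apart.
spec_a = [(430, 1.0), (470, 0.0)]
spec_b = [(430, 0.5), (470, 0.5)]

print(cone_response(spec_a, BLUE_BAND), cone_response(spec_b, BLUE_BAND))
# -> 1.0 1.0   (identical: the unfiltered cone sees one color)
print(cone_response(spec_a, LEFT_FILTER), cone_response(spec_b, LEFT_FILTER))
# -> 1.0 0.5   (the filtered eyes now disagree: the metamer is broken)
```

The point of the sketch is only that splitting one band into two sub-bands turns one measurement per color into two, which is what lets previously identical blues come apart.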

Color Spectrum Opens Up

In tests with butterfly wings and mosaics of different blues, the researchers saw shades of blue that were previously lumped together into what’s called a metamer. Because our eyes can’t possibly pick out every single wavelength, we combine similar bands of the spectrum into packets. This essentially defines an upper limit for color resolution, because there are only so many ways to combine these color packets. It’s estimated that each cone can distinguish about 100 different shades of a color. With three cones, all of the possible combinations create about a million colors — add another cone and we can potentially see one hundred million colors.
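The arithmetic behind those estimates is simple combinatorics: with roughly 100 distinguishable shades per cone type, the number of perceivable colors scales as 100 raised to the number of cone types (the function name below is just for illustration):

```python
# Back-of-the-envelope estimate from the article: each cone type
# resolves ~100 shades, and perceived colors are (to first
# approximation) independent combinations of cone responses.
SHADES_PER_CONE = 100

def distinguishable_colors(num_cone_types: int) -> int:
    """Estimated colors as combinations of per-cone shade levels."""
    return SHADES_PER_CONE ** num_cone_types

print(distinguishable_colors(3))  # trichromat: 1,000,000
print(distinguishable_colors(4))  # tetrachromat: 100,000,000
```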

The researchers hope to expand their experiment to the other two types of cones, which respond to red and green light, in order to explore the many hues we’re missing in the rest of the spectrum. Their lenses could potentially be used to detect camouflage or counterfeits, or at the grocery store to separate ripe fruits and vegetables from overripe ones.

No Such Thing as a “Digital Native”

Oh, kids these days. When they want to know something they Google it. When they want to buy something they go to Amazon. When they want to date someone they open Tinder.

It’s almost like they’re from a different country, one where technology has bled into every aspect of life. These so-called “digital natives” are endowed with the ability to seamlessly interact with any device, app or interface, and have migrated many aspects of their lives to the Internet. This is all to the detriment of the “digital immigrants,” those born before roughly 1984 who have been forced to adapt to unfamiliar and fast-changing technologies.

This line of thinking dates back to 2001, when educator Marc Prensky coined the term in an essay. Digital natives, he claimed, have a newfound faculty with technology, and can handle multiple streams of information because they are adept multitaskers. What’s more, according to Prensky, educators and businesses need to toss out tradition and adapt to appease this new, tech-savvy generation.

But “digital natives” don’t exist—at least according to new research—and it may be a fool’s errand to adapt traditional methods of learning or business to engage a generation steeped in technology.

Tale of the Digital Native

The true existence of digital natives has come under question in the years since, as multiple studies have shown that Millennials don’t necessarily use technology more often and are no better at using basic computer programs and functionalities than older generations. Multitasking has fared little better, as research shows that when asked to do two separate tasks at once, we take the same amount of time and make costly errors. Digital natives do, however, seem to have bought into the myth themselves, with nearly twice as many saying that they are digitally proficient as actually are.

“The answer is not how we can adapt it … we have to treat people as human, cognitive learners and stop considering one specific group to have special powers,” says Paul Kirschner, a professor of educational psychology at the Open University in the Netherlands.

Kirschner, together with his colleague Pedro de Bruyckere, recently authored a review paper on digital natives and multitasking in the journal Teaching and Teacher Education and argues for a shift in the way we think about our relationship to technology. We seem to assume, based on how easily the digital native myth propagated through society, that humans can meld perfectly with the devices and programs we create. As the majority of research on the matter suggests, however, that isn’t the case.

Doctors Use the New Google Glass When They See Patients

You could be forgiven for assuming that Glass, Google's head-mounted augmented-reality device, had been effectively dead since 2015. But as Google’s sister company X, the Moonshot Factory, announced on Tuesday, the project has been pivoting to a business-to-business model over the past two years. The new, updated version of the device is known as Glass Enterprise Edition, and it’s been put to use at companies like Boeing, DHL—and in your physician’s office.

Going to the doctor today is “a pretty tragic experience,” says Ian Shakil, the CEO and co-founder of a company called Augmedix. Its platform enables physicians to wear Glass Enterprise Edition as they see patients, while remote medical scribes fill out the electronic medical records based on what they hear and see from the visit.

The doctor’s office experience is unpleasant, Shakil claims, thanks to all the time the physician spends looking at a screen and typing, as opposed to just focusing on interacting with the patient at hand. Augmedix’s message to doctors is: “Put on Glass, go have normal conversations with your patients.” Meanwhile, the audio and video streamed from the Glass go to a trained medical scribe, who may be located in a place like California, India, or Bangladesh, and whose job it is to fill in the electronic health records.

The integrated display on Glass can be used to provide the doctor with information about the patient in real time as they perform the examination.

The system is thus a fusion of a high-tech streaming service with a tried-and-true human component on the other end. And while it might seem that AI and voice-recognition software would be well-positioned to do a job like this, Shakil says the scribes aren’t transcribing the visits word for word, which would produce a block of text, but instead distilling the conversation into a “structured medical note.”

Some people will likely find it creepy that their doctor is beaming audio and video to a remote assistant, especially if they’re in their skivvies. But the final decision about using the system rests with the patient; it’s not a mandatory part of receiving care.

Patients are informed before they see the doctor that the physician will be using Glass, and have the chance to allow it or not—but Shakil says 98 percent do consent. As for that streaming video, it can be switched off at the appropriate times, and when the video is on, a green light clearly indicates that for the patient. The doctor can switch to audio-only mode to continue the note-taking without video, or the system can be shut off completely.

Solving the Centuries-Old Mystery of the Rare 'Bright Night'



On rare occasions throughout history, the darkness of night fails to materialize. Even with the moon darkened, the sky fills with a diffuse glow that seems to filter out of the very air itself. Such “bright nights” have been recorded as far back as the days of Pliny the Elder in the first century A.D., although explanations for the phenomenon have been lacking.

Using a specialized interferometer and data from the 1990s, two Canadian researchers say they can explain why the sky seems so much brighter on some nights. Ultraviolet radiation from the sun regularly interacts with oxygen molecules in the atmosphere, occasionally splitting them into individual oxygen atoms. When these atoms meet again and recombine, the reaction gives off energy in the form of visible light.

This reaction is called airglow, and it’s something that researchers have been measuring for well over a century with specialized instruments. It often appears in pictures from the International Space Station, manifesting as a thin green curve hovering above the surface of Earth. The faint gleam isn’t all that rare — the authors estimate that it’s occurring somewhere around the planet about seven percent of the time — but airglow is normally invisible to us down here on the surface, as it is too faint to see.

But on rare occasions airglow lights up the night sky due to the “stacking” of high-altitude atmospheric waves, which can multiply its intensity by a factor of 10. When the waves’ phases align, their amplitudes add, and when up to four of them combine at a certain longitude, the night sky comes alive.
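A toy phasor calculation (not the researchers' atmospheric model; the phase values are arbitrary) shows why phase alignment matters so much. Treating each wave as a unit-amplitude phasor, in-phase waves add directly while scattered phases largely cancel; since intensity goes as amplitude squared, four aligned waves give roughly a sixteenfold boost in this idealized picture, the same order of magnitude as the factor of 10 described above:

```python
import math

def peak_amplitude(phases):
    # Combined amplitude of unit-amplitude waves treated as phasors:
    # |sum of e^(i*phase)| over the given phase offsets.
    re = sum(math.cos(p) for p in phases)
    im = sum(math.sin(p) for p in phases)
    return math.hypot(re, im)

aligned = peak_amplitude([0.0, 0.0, 0.0, 0.0])    # four in-phase waves
scattered = peak_amplitude([0.0, 1.9, 3.7, 5.2])  # arbitrary offsets

print(aligned)    # 4.0 -> intensity ~16x a single wave
print(scattered)  # well under 1.0 -> mostly cancelled
```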

In a paper published in Geophysical Research Letters, the researchers estimate that this occurs roughly once a year for any given location, although the presence of a full moon or other light source can wash the glow out. Strong storms that roil the atmosphere can make bright nights more likely, an effect that may grow more frequent as weather patterns shift around the world.

For many of us, though, it won’t make much difference — the light pollution emanating from our urban areas swallows up even the strongest airglow for miles beyond the city limits.