New Lenses Can Give You Super Color Vision

Thanks to the architecture in our eyes, we see but a small subset of the hues that make up the visible spectrum.

We only have three kinds of cones, or color-sensitive cells, to make sense of what could be millions or even hundreds of millions of colors. We still do a pretty good job of it — normal human eyes can pick out about a million different colors, far more than we have ever come up with names for. Still, we could conceivably do better.





More cones would detect more combinations of colors and ever more subtle distinctions between shades. Some people, called tetrachromats, actually possess an extra cone and can see colors invisible to the rest of us. Now, for those of us not blessed with such a mutation, researchers at the University of Wisconsin-Madison have devised a pair of lenses that splits the color spectrum to turn us into artificial tetrachromats.

Splitting One Cone Into Two

Mikhail Kats, a professor in the department of electrical and computer engineering, and his graduate student, Brad Gundlach, focused on a specific type of cone in our eye that is responsible for seeing blues, or the high frequency end of the visible spectrum.

It works like this: We actually have six cones, not three, because each eye contains its own set. Normally, each eye’s cones pick up the same wavelengths of light. By selectively blocking out different parts of the spectrum in each eye, the researchers’ lenses make cones that normally work together send separate packets of information to the brain.

Kats says this effectively simulates an additional cone by giving each of our blue photoreceptors a different half of the spectrum. Each eye then sends a different signal to the brain when confronted with the color blue, and when it combines that information, new colors emerge. The technique could theoretically give us as many as six kinds of cones using various combinations of lenses. They published their work on the preprint server the arXiv.

Color Spectrum Opens Up

In tests with butterfly wings and mosaics of different blues, the researchers saw shades of blue that were previously lumped together into what’s called a metamer. Because our eyes can’t possibly pick out every single wavelength, we combine similar bands of the spectrum into packets. This essentially defines an upper limit for color resolution, because there are only so many ways to combine these color packets. It’s estimated that each cone can distinguish about 100 different shades of a color. With three cones, all of the possible combinations create about a million colors — add another cone and we can potentially see one hundred million colors.
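For a sense of where those numbers come from, here is a minimal back-of-the-envelope sketch in Python, assuming (as the estimate above does) roughly 100 distinguishable shades per cone type and treating each cone’s response as an independent channel:

```python
# Back-of-the-envelope version of the estimate above: ~100 distinguishable
# shades per cone type, with each cone treated as an independent channel.
SHADES_PER_CONE = 100

def distinguishable_colors(cone_types: int) -> int:
    """Rough count of color combinations for a given number of cone types."""
    return SHADES_PER_CONE ** cone_types

print(f"3 cones: ~{distinguishable_colors(3):,} colors")   # ~1,000,000
print(f"4 cones: ~{distinguishable_colors(4):,} colors")   # ~100,000,000
```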

The researchers hope to expand their experiment to the other two types of cones, which respond to red and green light, in order to explore the many hues we’re missing in the rest of the spectrum. Their lenses could potentially be used to detect camouflage or counterfeits, as well as at the grocery store to separate ripe fruits and vegetables from those that are overripe.

Digital Natives Don't Exist

Oh, kids these days. When they want to know something they Google it. When they want to buy something they go to Amazon. When they want to date someone they open Tinder.

It’s almost like they’re from a different country, one where technology has bled into every aspect of life. These so-called “digital natives” are endowed with the ability to seamlessly interact with any device, app or interface, and have migrated many aspects of their lives to the Internet. This is all to the detriment of the “digital immigrants,” those born before roughly 1984 who have been forced to adapt to unfamiliar and fast-changing technologies.




This line of thinking dates back to 2001, when educator Marc Prensky coined the term in an essay. Digital natives, he claimed, have a newfound faculty with technology, and can handle multiple streams of information because they are adept multitaskers. What’s more, according to Prensky, educators and businesses need to toss out tradition and adapt to appease this new, tech-savvy generation.

But “digital natives” don’t exist—at least according to new research—and it may be a fool’s errand to adapt traditional methods of learning or business to engage a generation steeped in technology.

Tale of the Digital Native

The true existence of digital natives has come under question in the years since, as multiple studies have shown that Millennials don’t necessarily use technology more often and are no better at using basic computer programs and functionalities than older generations. Multitasking has fared little better, as research shows that when asked to do two separate tasks at once, we take the same amount of time and make costly errors. Digital natives do, however, seem to have bought into the myth themselves, with nearly twice as many saying that they are digitally proficient as actually are.

“The answer is not how we can adapt it … we have to treat people as human, cognitive learners and stop considering one specific group to have special powers,” says Paul Kirschner, a professor of educational psychology at the Open University in the Netherlands.

Kirschner, together with his colleague Pedro de Bruyckere, recently authored a review paper on digital natives and multitasking in the journal Teaching and Teacher Education and argues for a shift in the way we think about our relationship to technology. We seem to assume, based on how easily the digital native myth propagated through society, that humans can meld perfectly with the devices and programs we create. As the majority of research on the matter suggests, however, that isn’t the case.

Doctors Are Using the New Google Glass When They See Patients

You could be forgiven for assuming that Glass, Google's head-mounted augmented-reality device, had been effectively dead since 2015. But as Google’s sister company X, the Moonshot Factory, announced on Tuesday, the project has been pivoting to a business-to-business model over the past two years. The new, updated version of the device is known as Glass Enterprise Edition, and it’s been put to use at companies like Boeing, DHL—and in your physician’s office.

Going to the doctor today is “a pretty tragic experience,” says Ian Shakil, the CEO and co-founder of a company called Augmedix. Its platform enables physicians to wear Glass Enterprise Edition as they see patients, while remote medical scribes fill out the electronic medical records based on what they hear and see from the visit.


The doctor’s office experience is unpleasant, Shakil claims, thanks to all the time the physician spends looking at a screen and typing, as opposed to just focusing on interacting with the patient at hand. Augmedix’s message to doctors is: “Put on Glass, go have normal conversations with your patients.” Meanwhile, the audio and video streamed from the Glass go to a trained medical scribe, who may be located in a place like California, India, or Bangladesh, and whose job it is to fill in the electronic health records.

The integrated display on Glass can be used to provide the doctor with information about the patient in real time as they perform the examination.

The system is thus a fusion of a high-tech streaming service with a tried-and-true human component on the other end. And while it might seem like AI and voice-recognition software might be well-positioned to do a job like this, Shakil says that what the scribes are doing isn’t transcribing the visits word for word, which would result in a block of text, but instead producing a “structured medical note” from the conversation.

Some people will likely find it creepy that their doctor is beaming audio and video to a remote assistant, especially if they’re in their skivvies. But the final decision about the system’s usage is determined by the patient. It’s not a mandatory part of receiving care.

Patients are informed before they see the doctor that the physician will be using Glass, and have the chance to allow it or not—but Shakil says 98 percent do consent. As for that streaming video, it can be switched off at the appropriate times, and when the video is on, a green light clearly indicates that for the patient. The doctor can switch to audio-only mode to continue the note-taking without video, or the system can be shut off completely.

Solving the Centuries-Old Mystery of the Rare 'Bright Night'

On rare occasions throughout history, the darkness of night fails to materialize. Even with the moon darkened, the sky fills with a diffuse glow that seems to filter out of the very air itself. Such “bright nights” have been recorded as far back as the days of Pliny the Elder in the first century A.D., although explanations for the phenomenon have been lacking.

Using a special interferometer and data from the 1990s, two Canadian researchers say that they can explain why the sky seems so much brighter on some nights. Ultraviolet radiation from the sun regularly interacts with oxygen molecules in the atmosphere, occasionally splitting them into individual oxygen atoms. When those atoms later meet and recombine, the reaction gives off energy in the form of visible light.
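In schematic form, the two-step process described above can be written as follows; this is a simplified sketch, since the real photochemistry runs through intermediate excited states:

```latex
% Simplified sketch of the airglow chemistry described above; the real
% reaction chain runs through intermediate excited states.
\mathrm{O_2} + h\nu_{\text{UV}} \;\longrightarrow\; \mathrm{O} + \mathrm{O}
\qquad\text{then}\qquad
\mathrm{O} + \mathrm{O} \;\longrightarrow\; \mathrm{O_2} + h\nu_{\text{visible}}
```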

This reaction is called airglow, and it’s something that researchers have been measuring for well over a century with specialized instruments. It often appears in pictures from the International Space Station, manifesting as a thin green curve hovering above the surface of Earth. The faint gleam isn’t all that rare — the authors estimate that it’s occurring somewhere around the planet about seven percent of the time — but airglow is normally too faint to see from down here on the surface.

But on rare occasions airglow lights up the night sky due to the “stacking” of high-altitude atmospheric waves, which can multiply the intensity by a factor of 10. When the wave frequencies align, their amplitudes increase, and when up to four of them combine at a certain longitude, the night sky comes alive.

In a paper published in Geophysical Research Letters, the researchers estimate that this occurs roughly once a year for any given location, although the presence of a full moon or other light source can wash the glow out. When strong storms roil the atmosphere, it can make the bright nights more likely, a factor that may increase in frequency as weather patterns shift around the world.

For many of us, it won’t really make a difference though — the light pollution that emanates from our urban areas swallows up even the strongest airglow for miles beyond the city limits.

Yeast's Latest Trick: Detecting Deadly Pathogens

Yeast, the ubiquitous little fungus that can seemingly do it all, is doing more.

If you aren’t familiar with yeast’s accolades, here’s a refresher: It gives beer its buzz, it can produce textiles, safer opioids and tasty food, and it’s the workhorse model organism in scientific labs around the world. Now, researchers have put yeast to work detecting deadly, pervasive fungal pathogens.


A team of researchers led by Columbia University’s Virginia Cornish designed an elegantly simple biosensor—it’s a dipstick—using genetically modified Saccharomyces cerevisiae, or baker’s yeast, that signals the presence of fungal pathogens responsible for diseases in humans and agriculture. And unlike so many clinical advances with ambiguous timelines for real-world implementation, researchers say their biosensor, which costs less than a penny, can be deployed around the world right now.

“While at an early stage of implementation, these biosensors can be immediately adopted in the clinic to shorten the time required for diagnosis of fungal pathogens from blood cultures,” researchers wrote in their study, which was published Wednesday in the journal Science Advances.

Fungal pathogens are estimated to cause 2 million deaths annually around the world, in addition to ravaging staple food crops. These pathogens, often detected far too late, tend to have an outsize impact on people living in impoverished, low-resource parts of the planet that lack well-equipped clinical and industrial labs. A cheap biosensor would be a boon to global pathogen surveillance efforts, which would undoubtedly save many lives.
How It Works

Cornish and her students replaced cell surface protein receptors on baker’s yeast with receptor proteins specific to a targeted pathogen. In their first trial, researchers swapped in pheromone receptors from Candida albicans, a fungus that occurs naturally in the gut but can cause deadly infections if it runs amok.

Then, Cornish’s team further engineered the yeast to produce lycopene whenever its new receptors detected C. albicans pheromones in a sample. Lycopene is the same compound that gives tomatoes their red color, so when a pathogen was present, the researchers’ biosensors turned red. They found equal success applying the same technique to detect 10 more pathogens.

The accuracy of their yeast biosensor went toe-to-toe with current techniques that rely on expensive equipment and personnel, and their biosensors were just as effective detecting pathogens in water, blood, dirt and urine. What’s more, their biosensors can be mass-produced using existing yeast culture technologies, and they are still effective after 38 weeks on the shelf.

Company Claims It Will Provide Lab-Grown Meat by 2018

Patties of beef grown in a lab could be hitting supermarket shelves as early as 2018.

That’s the bold statement from Hampton Creek, a San Francisco-based food company that produces mainly vegan condiments and cookie doughs. As the Wall Street Journal reports, the company says it is working on growing cultured animal cells in the lab to turn into cruelty-free meat products, and the product could be ready as early as next year. If the rocky history of lab-grown meats is anything to judge by, however, the company has a difficult road ahead of it.


Old Idea, New Tactics

The idea of lab-grown meats dates back decades, and researchers have been able to coax muscle cells to grow in the lab since the 1970s. The prospect of actually bringing these artificial meats to the table resurfaced in 2006, when Vladimir Mironov, then at the Medical University of South Carolina, proposed plans for a coffee maker-like machine that would brew up personalized burgers and steaks from cell cultures and growth medium overnight. That project eventually foundered, as we reported this year, but the lure of lab-grown meats remains attractive.

Mark Post, a physiologist at Maastricht University in the Netherlands, unveiled the first actual lab-grown burger in 2013 at a glitzy event in London. It cost $325,000 to produce (although he says costs have since come down), and, according to the tasters, was a bit on the bland side. Post has since formed a company, Mosa Meat, to refine the technology needed to bring costs down, and other groups, such as Memphis Meats, are pursuing a similar goal.

No Easy Task

The challenges they face are multifaceted. The most pressing concern at the moment is scale — while it’s been shown to be possible to grow a hamburger in the lab, that doesn’t mean we’re anywhere near producing them by the millions. It currently takes massive amounts of cultured tissue to produce even one patty, meaning that both the physical space and cost requirements far outweigh the returns of growing meat in a lab at the moment. Artificial meat also requires a scaffold to grow on, a structure that will ideally be edible for lab-grown meats and must be stretched, or “exercised,” periodically to stimulate growth. And the lab beef grown so far can’t even claim to be cruelty-free yet, as it requires fetal calf serum for sustenance.


Once those issues have been resolved, consumers will need to be convinced both that lab-grown meats are safe, and that they taste as good as the real thing.

Computer Algorithm Can Spot Drunk Tweeters

Drunk tweets, long considered an unfortunate yet ubiquitous byproduct of the social media age, have finally been put to good use.

With the help of a machine-learning algorithm, researchers from the University of Rochester cross-referenced tweets mentioning alcohol consumption with geo-tagging information to broadly analyze human drinking behavior. They were able to estimate where and when people imbibed, and, to a limited extent, how they behaved under the influence. The experiment is more than a social critique — the algorithm helps researchers spot drinking patterns that could inform public health decisions and could be applied to a range of other human behaviors.

#Drunk Tweeting

To begin with, the researchers sorted through a selection of tweets from both New York City and rural New York with the help of Amazon’s Mechanical Turk. Users identified tweets related to drinking and picked out keywords, such as “drunk,” “vodka” and “get wasted,” to train an algorithm.

They put each relevant tweet through a series of increasingly stringent questions to home in on tweets that not only referenced the author drinking but indicated that they were doing so while sending the tweet. That way, they could determine whether a person was actually tweeting and drinking, or just sending tweets about drinking. Once they had built up a dependable database of keywords, they were able to fine-tune their algorithm so it could recognize words and locations that likely proved people were drinking.
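To make the idea concrete, here is a minimal sketch of what a keyword-driven screen of this kind might look like; the keyword lists, cues and decision rule below are invented for illustration and are not the researchers’ actual trained model:

```python
# Minimal sketch of a keyword-based "drinking tweet" screen, loosely modeled
# on the approach described above. Keywords, cues and the decision rule are
# hypothetical illustrations, not the researchers' trained algorithm.
DRINKING_KEYWORDS = {"drunk", "vodka", "get wasted", "beer", "hammered"}
IN_THE_MOMENT_CUES = {"right now", "cheers", "another round", "at the bar"}

def classify_tweet(text: str) -> str:
    text = text.lower()
    mentions_drinking = any(k in text for k in DRINKING_KEYWORDS)
    drinking_now = any(c in text for c in IN_THE_MOMENT_CUES)
    if mentions_drinking and drinking_now:
        return "tweeting while drinking"
    if mentions_drinking:
        return "tweet about drinking"
    return "unrelated"

print(classify_tweet("Cheers! Another round of vodka at the bar"))
print(classify_tweet("Never getting drunk again, ugh"))
print(classify_tweet("Great run this morning"))
```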

To get tweeters’ locations, they used only tweets that had been geo-tagged with Twitter’s “check-in” feature. They then approximated users’ home locations by checking where they were when they sent tweets in the evenings, in addition to tweets containing words like “home” or “bed.” This let them determine whether users preferred to drink at home or out at bars and restaurants.

Combining these two datasets gave the researchers a broad idea of how many people in a given area or at a given time were drinking. Not surprisingly, they found a correlation between the number of bars and how much people drank — more bars meant more drunk people. New York City saw a stronger correlation between the two, suggesting that people in the big city really do drink more. Rather paradoxically, their data also showed that city dwellers were more likely to tweet about drinking at home as well.

Their work builds on previous studies that attempted to tie people’s tweets to specific activities and locations. By using the check-in feature, they say that their system is much more accurate than others, and can reliably place people within a block of their actual location. They published their work on the pre-print server arXiv.

How to Train Your Robot With Brain 'Oops' Signals

Baxter the robot can tell the difference between right and wrong actions without its human handlers ever consciously giving a command or even speaking a word. The robot’s learning success relies upon a system that interprets the human brain’s “oops” signals to let Baxter know when it has made a mistake.

The new twist on training robots comes from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University. Researchers have long known that the human brain generates certain error-related signals when it notices a mistake. They created machine-learning software that can recognize and classify those brain oops signals from individual human volunteers within 10 to 30 milliseconds—a way of creating instant feedback for Baxter the robot when it sorted paint cans and wire spools into two different bins in front of the humans.
The human volunteers wore electroencephalography (EEG) caps that detected those oops signals whenever they saw Baxter the robot make a mistake. Each volunteer first underwent a short training session in which the machine-learning software learned to recognize their brain’s specific “oops” signals. But once that was completed, the system was able to start giving Baxter instant feedback on whether each human handler approved or disapproved of the robot’s actions.
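As a rough illustration of that pipeline, the sketch below fits a per-volunteer classifier on labeled EEG windows and then labels a new window; the data is synthetic placeholder noise and the channel and sample counts are assumptions, not the actual CSAIL setup:

```python
# Illustrative sketch of a per-volunteer "oops signal" classifier: fit on
# labeled EEG windows from the training session, then classify new windows
# as error / no-error. The data here is synthetic placeholder noise and the
# channel/sample counts are assumptions, not the actual CSAIL setup.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

n_channels, n_samples = 48, 25              # hypothetical EEG window shape
n_train = 200
X_train = rng.normal(size=(n_train, n_channels * n_samples))  # flattened windows
y_train = rng.integers(0, 2, size=n_train)  # 1 = error-related potential present

clf = LinearDiscriminantAnalysis().fit(X_train, y_train)

# At run time, each incoming window would be classified within tens of
# milliseconds and fed back to the robot as approval or disapproval.
new_window = rng.normal(size=(1, n_channels * n_samples))
print("oops detected" if clf.predict(new_window)[0] == 1 else "no error")
```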

It’s still far from a perfect system, or even a 90-percent accuracy system when performing in real time. But researchers seem confident based on the early trials.

The MIT and Boston University researchers also discovered that they could improve the system’s offline performance by focusing on stronger oops signals that the brain generates when it notices so-called “secondary errors.” These errors came up when the system misclassified the human brain signals by either falsely detecting an oops signal when the robot was making the correct choice, or when the system failed to detect the initial oops signal when the robot was making the wrong choice.

By incorporating the oops signals from secondary errors, researchers succeeded in boosting the system’s overall performance by almost 20 percent. The system cannot yet process the oops signals from secondary errors in actual live training sessions with Baxter. But once it can, researchers expect to boost the overall system accuracy beyond 90 percent.

The research also stands out because it showed how people who had never tried the EEG caps before could still learn to train Baxter the robot without much trouble. That bodes well for the possibilities of humans intuitively relying on EEG to train their future robot cars, robot humanoids or similar robotic systems.

How to Follow Fish Using Tags

1. Jaw tags: Researchers use these external tags, about the size of a quarter, to identify whether a fish has an internal tracker implanted in it. The tag is usually fitted onto a fish’s lower mandible.

2. Pop-up satellite archival tag: This external tag collects detailed data on fish vitals, location and environmental information, such as light levels.

3. Acoustic transmitter: Internal or external tag that’s ideal for tracking fish in deeper water. These transmitters produce a unique set of pings that get assigned to an individual fish. To collect data, researchers either go out in a vessel to pick up signals or download the information from receivers stationed in the fish’s environment.

4. Cinch tags: This type of external tag indicates the fish bearing it is part of a study and lists contact information for the agency monitoring it. If recreational fishers reel in a catch with a cinch tag, they should report it to the agency listed on the tag.

5. Coded wire tags: Unlike the large-scale model shown here, these internal tags have a true diameter similar to that of mechanical pencil lead. The wire comes on a spool and is lined with imprinted numbers. When a biologist cuts off a piece to make a tag, a unique serial number is paired with a fish. To read this number, researchers need magnification equipment.

6. T-bar tags: External tags that come in a variety of colors. Similar to cinch tags, T-bars flag the fish as part of a study.

7. Radio telemetry tag: Internal tag for tracking fish in shallower waters. Researchers use an antenna, either handheld or secured beneath a plane or boat, to pick up the tag’s radio signal.

8. Visual Implant (VI) tags: Internal tags mixed with brightly colored biocompatible substances that researchers implant into translucent sections of a fish. While it’s possible to spot them with the naked eye, researchers typically need fluorescent light or magnification to see VI tags. Different colors can indicate details such as the year a fish was tagged for study.

9. Hydrostatic tags: Much like cinch tags, these external tags flag the fish as being part of a study.

10. Passive integrated transponder (PIT) tags: An internal tag that biologists must scan to activate. PITs relay data on fish growth rates and movement patterns to a receiver.

Algorithm Can Choose the Next Silicon Valley Unicorn

In the world of venture capitalists, not everyone is Peter Thiel. The Silicon Valley investor reaped $1 billion in 2012 when he cashed in his Facebook stock, turning a 2,000 percent profit from his initial $500,000 investment. Stories like Thiel’s may be inspirational, but they are by far the outlier. The start-up world sees thousands of hopeful companies pass through each year. Only a fraction of those ever return a profit.



Picking a winner, the elusive “unicorn,” is as much a matter of luck as it is hard numbers. Factors like founder experience, workplace dynamics, skill levels and product quality all matter, of course, but there are countless other variables that can spell heartbreak for an aspirational young company. Successful venture capital firms claim to know the secret to success in Silicon Valley, but it can still be a harrowing game to play.

Chasing Unicorns

Humans just aren’t very good at objectively sorting through thousands of seemingly unrelated factors to pick out the subtle trends that mark successful companies. This kind of work, however, is where machine learning programs excel. Two researchers at MIT have developed a custom algorithm aimed at doing exactly that and trained it on a database of 83,000 start-up companies. This allowed them to sift out the factors that were best correlated with success — in this case, a company being acquired or reaching an IPO, both situations that pay off handsomely for investors.
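A minimal sketch of that general approach might look like the following; the features, data and model choice here are hypothetical stand-ins, not the MIT researchers’ actual algorithm or dataset:

```python
# Sketch of the general recipe: featurize each start-up, label it by outcome
# (acquired/IPO or not), and train a classifier. Features, data and model
# choice here are hypothetical stand-ins, not the MIT algorithm or dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_companies = 5000

X = np.column_stack([
    rng.integers(1, 8, n_companies),    # number of funding rounds raised
    rng.uniform(3, 36, n_companies),    # average months between rounds
    rng.uniform(0, 20, n_companies),    # variance of that gap ("erratic" funding)
    rng.integers(0, 25, n_companies),   # founders' years of prior experience
])
y = rng.integers(0, 2, n_companies)     # 1 = acquired or reached an IPO

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```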


What’s the Secret Recipe?

They found that one of the biggest predictors of success was how start-ups moved through rounds of funding. And it wasn’t the slow and steady companies that were hitting it big, it was the ones that moved most erratically, pausing at one level of funding and then rocketing through the next few. How this plays into start-up success isn’t completely understood at the moment though.

The researchers say that their algorithm could be applied to much more than just nascent tech companies. The same principles that allow it to pick a handful of winners from a crowd of duds should also apply in areas as diverse as the pharmaceutical industry and the movie business, where just a few successes can pay out billions. These are fields where the top players are lionized for their ability to sniff out winners and reap the substantial rewards. As with factory workers, bank tellers and telemarketers, the robots could be coming for their jobs as well.

Hacking and Doomsday Top Self-Driving Car Fears Online

Silicon Valley tech giants and Detroit automakers have to convince people to trust self-driving cars before they can sell the futuristic technology to customers. That may prove tricky considering the public’s lingering fears and concerns regarding self-driving cars. A recent AI-assisted analysis of more than one trillion social posts revealed that scared-face emoticons related to self-driving cars rose from 30 percent of all emoticons used on the topic to 50 percent by 2016. Top concerns mentioned on social media included fears of self-driving cars being hacked and “robot apocalypse” scenarios of technological change.

It would be silly to interpret “scared face” emoticons and emoji posted online as being fully representative of public attitudes toward self-driving cars. But it’s still a useful sign of the public relations challenge facing companies hoping to sell the self-driving car future to the broader public–especially given that about 70 percent of all Americans use some form of social media. The recent social media findings by Crimson Hexagon, an analytics company based in Boston, also generally line up with previous studies on public attitudes toward self-driving cars. Any company that wants to sell a positive image of self-driving cars will almost inevitably have to confront online narratives of nightmare hacking scenarios and doomsday visions.

Crimson Hexagon’s report looked at one trillion social posts from sites such as Twitter, Facebook, Instagram, Reddit, and online forums, as well as some car-specific sites such as Autotrader and Edmunds. The company performed the analysis using machine learning–a fairly common AI technology nowadays–to sift through patterns in words and emoticons within the social media posts. The machine learning system relied on natural language processing and was also able to help identify some of the emotional sentiment behind certain words or phrases.

Concerns about self-driving cars being vulnerable to hacker attacks seemed fairly prominent with 18,000 mentions. These self-driving car fears were often driven by mainstream news reports that discussed hacking vulnerabilities and self-driving car safety. But doomsday fears of technological change that revolved around “apocalypse, doomsday and the destruction of humanity” came in close behind with 17,000 mentions.

The online talk was not all doom and gloom surrounding self-driving cars. About 6,000 social posts focused on the positive side of self-driving cars as a “technological revolution that harnesses big data and machine learning.” Another 7,000 social posts discussed self-driving cars as a possible solution to traffic jams and highway congestion, even as they also featured angry venting. And 4,000 social posts talked up the innovation behind self-driving cars and expressed awe of the entrepreneurs and engineers developing such technologies.

Turning Anything Into a Touchscreen With 'Electrick'

Buttons, who needs ’em?


A new proof-of-concept technology from Carnegie Mellon University turns everyday objects into touch interfaces with an array of electrodes. Walls, guitars, toys and steering wheels come alive with touch sensitivity in their video, and it seems that the possibilities are pretty much endless. What could be next? Grocery store aisles? Whole buildings? Other people? Cell phones?

The design, called Electrick, comes from Carnegie Mellon’s Future Interfaces Group and takes advantage of the same principle as your smartphone’s touchscreen. Because our skin is conductive, when we touch a surface with electricity running through it, we alter the electric field in a predictable way. By coating objects with electrically conductive materials and surrounding them with electrodes, the team can triangulate the position of a finger based on fluctuations in the field. With a microprocessor added, they can train their program to translate swipes and taps into commands.
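As a toy illustration of the idea, the sketch below estimates a touch location from the signal changes seen at a handful of edge electrodes; the geometry, readings and weighted-centroid rule are invented for illustration and are not CMU’s actual algorithm:

```python
# Toy illustration of locating a touch from electrode readings, in the spirit
# of the field-sensing idea described above (not CMU's actual algorithm).
# Electrode layout, readings and the weighted-centroid rule are all invented.
import numpy as np

# Electrode (x, y) positions around the edge of a 1x1 conductive surface.
electrodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])

# Change in measured signal at each electrode when a finger touches down;
# here, a larger change simply stands in for "closer to the touch point."
signal_change = np.array([0.8, 0.3, 0.1, 0.4])

# Estimate the touch point as the signal-weighted centroid of the electrodes.
weights = signal_change / signal_change.sum()
touch_estimate = weights @ electrodes
print("estimated touch location:", touch_estimate)
```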

They experimented with a few different application methods. Vacuum forming works for simple shapes, while a spray-on version coats even irregular objects, such as a guitar or a miniature Yoda head. Materials can also be custom molded or 3-D printed, and it appears that Electrick even works with Play-doh and jello.

Some of the more practical applications include prototyping controller designs and modifying laptops and surfaces to run programs with a single touch, but the sky is really the limit here. Turn on your lights with the refrigerator. Play Halo with your coffee table. Change the channel with your cat (maybe not). You can imagine a future where any surface is a potential control device — and the attendant embarrassment when sitting down in the wrong place causes the blender to erupt.

Their system is low-cost and widely applicable, they say, and the only downside at the moment is that the presence of an electromagnetic field from other powered objects nearby can interfere with the accuracy of the system. They are currently working on ways to get around that.

Panther Drone Delivers Packages by Air and Land

A four-wheeled drone’s first aerial package delivery test showed off a special touch by also driving up to the doorstep of its pretend customer. That capability to deliver by both air and land makes the Panther drone an unusual competitor in the crowded drone delivery space. But the drone’s limited delivery range may pose a challenge in competing against the delivery drones of Google and Amazon.

Unlike most delivery drones designed purely for flight, the Panther drone resembles a four-wheeled robot with six rotors extending out from its sides. That design leverages the earlier “flying car” development efforts of Advanced Tactics Inc., a company based in Southern California. Previously, Advanced Tactics spent time developing its “Black Knight Transformer” flying car with U.S. military missions in mind. The Panther drone appears to be a miniaturized version of the larger Transformer, with commercial drone delivery as one of several possible new roles.

“The Panther can fly at over 70 mph and has a flight time with a five-pound package of well over six minutes,” says Don Shaw, CEO of Advanced Tactics Inc. “With a two-pound package and larger battery, it can fly well over nine minutes.”
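Taking those figures at face value, a quick back-of-the-envelope calculation (assuming, optimistically, that the drone cruises at top speed for its entire flight time) shows why range is a concern:

```python
# Quick range estimate from the quoted figures, assuming (optimistically)
# that the drone cruises at top speed for its entire flight time.
speed_mph = 70
flight_time_hours = 6 / 60           # six minutes with a five-pound package

one_way_miles = speed_mph * flight_time_hours
print(f"one-way range: ~{one_way_miles:.0f} miles")            # ~7 miles
print(f"out-and-back radius: ~{one_way_miles / 2:.1f} miles")  # ~3.5 miles
```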


Panther Drone Tradeoffs

The good news for the Panther’s delivery drone aspirations is that its versatility could make it easier to deliver packages. Delivery drones will eventually face the challenge of navigating neighborhoods with obstacles such as trees and power lines. In addition, drone developers must figure out how the drones will safely deliver packages into the hands of customers without risking any drone-on-human accidents.

Some delivery drone efforts such as Google’s Project Wing have attempted workaround solutions such as lowering burrito deliveries to the ground with a cable. By comparison, the Panther could simply land in any open area—such as on a local road—and then drive to the doorstep of customers. It could even drive inside the doorways of businesses or access warehouses through their loading bays.

But air and ground versatility may have come at the cost of delivery range. That is because the ground mobility drivetrain adds extra weight that the Panther drone must expend battery power on lifting whenever it flies through the air. A future version of the Panther drone with a robotic arm to handle packages could potentially be heavier and shorten the delivery range even more. (On the other hand, Shaw pointed out that the drone can drive for hours on the ground at up to five miles per hour.)

The Electric Lilium Jet Points to the Future of Air Taxis

The old science fiction fantasy of a flying car that both drives on the ground and flies in the air is unlikely to revolutionize daily commutes. Instead, Silicon Valley tech entrepreneurs and aerospace companies dream of electric-powered aircraft that can take off vertically like helicopters but have the flight efficiency of airplanes. The German startup Lilium took a very public step forward in that direction by demonstrating the first electric-powered jet capable of vertical takeoff and landing last week.

The Lilium Jet prototype that made its maiden debut resembles a flattened pod with stubby thrusters in front and a longer wing with engines in the back. The final design concept shows two wings holding a combined 36 electric turbofan engines that can tilt to provide both vertical lifting thrust and horizontal thrust for forward flight. Such electric engines powered by lithium-ion batteries could enable a quieter breed of aircraft that could someday cut travel times for ride-hailing commuters from hours to minutes in cities such as San Francisco or New York. On its website, Lilium promises an air taxi that could eventually carry up to five people at speeds of 190 miles per hour: about the same speed as a Formula One racing car. And it’s promising that passengers could begin booking the Lilium Jet as part of an air taxi service by 2025.

“From a technology point of view, there is not a challenge that cannot be solved,” says Patrick Nathen, a cofounder and head of calculation and design for Lilium. “The biggest challenge right now is to build the company as fast as possible in order to catch that timeline.”

Nathen and his cofounders met just three and a half years ago. But within that short time, they put together a small team and began proving their dream of an electric jet capable of vertical takeoff and landing (VTOL). Lilium began with seed funding from a tech incubator under the European Space Agency but has since attracted financial backing from private investors and venture capital firms.

Getting Lilium off the ground probably would not have been possible just five years ago, Nathen says. But the team took full advantage of the recent technological changes that have lowered the price on both materials—such as electric circuits and motors—and manufacturing processes such as 3D printing. Lower costs enabled Lilium to quickly and cheaply begin assembling prototypes to prove that their computer simulations could really deliver on the idea of an electric VTOL jet.

Useful Ways to Solve Crime



The thrill of a crime story is the unfolding of “whodunnit,” often against a backdrop of very little evidence. Positively identifying a suspect, even with a photo of her face, is challenging enough. But what if the only evidence available is a grainy image of a suspect’s hand?

Thanks to a group at the University of Dundee in the UK, that’s enough information to positively ID the perp.

The Centre for Anatomy and Human Identification (CAHID) can assess vein patterns, scars, nail beds, skin pigmentation and knuckle creases from images of hands, and it has used those features to show, with high reliability, that police had the right person in several very serious court cases in the UK. CAHID specializes in human identification and was also the group that famously reconstructed King Richard III’s face after his remains were found beneath a car park in Leicester in 2012.

In the Dark

The technique was born in 2006 when local police came to the team with a Skype video recorded in the dark, which had been languishing on their desks for some time. The dark recording conditions meant the images were captured in infrared light, and only a hand and forearm were in view. That was enough for the team to match the superficial vein patterns of the offender and the suspect with high reliability.

“The infrared light interacts with the deoxygenated blood in the veins so you can see them as black lines,” says Professor Dame Sue Black, who led the research. “You are actually seeing the absorption of the infrared light into the deoxygenated blood.” Black is an expert in forensic anthropology who has been crucial in high-profile criminal cases in the UK and headed the British Forensic Team’s exhumation of mass graves in Kosovo in 1999.

Building a Research Basis

Since that first case in 2006, CAHID has used the method in roughly 30 to 40 cases per year, and the team has also applied the procedure to intelligence and counterterrorism work. Over the past decade, they have been hard at work trying to establish a research basis for their method.

“It is important that we are able to say with some degree of reliability that we can exclude a suspect, or say there is a strong likelihood this is the same individual,” says Black.

In order to develop their technique further, CAHID created a database of 500 police officers’ arms and hands, taken in both visible and infrared light.