Tuesday, August 15, 2017

How to Train Your Robot with Brain “Oops” Signals

Baxter the robot can tell the difference between right and wrong actions without its human handlers ever consciously giving a command or even speaking a word. The robot’s learning success relies on a system that interprets the human brain’s “oops” signals to let Baxter know when it has made a mistake.

The new twist on training robots comes from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University. Researchers have long known that the human brain generates certain error-related signals when it notices a mistake. They created machine-learning software that can recognize and classify those brain oops signals from individual human volunteers within 10 to 30 milliseconds, creating instant feedback for Baxter the robot as it sorted paint cans and wire spools into two different bins in front of the humans.
The human volunteers wore electroencephalography (EEG) caps that detected those oops signals whenever they saw Baxter the robot making a mistake. Each volunteer first underwent a short training session in which the machine-learning software learned to recognize their brain’s specific “oops” signals. Once that was complete, the system could start giving Baxter instant feedback on whether each human handler approved or disapproved of the robot’s actions.
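
To make the idea concrete, here is a minimal sketch of how such a per-volunteer classifier might be trained, assuming EEG epochs have already been cut around each robot action and labeled. The placeholder data, the flattened features, and the choice of scikit-learn’s linear discriminant analysis are illustrative assumptions, not details from the study.

```python
# Minimal sketch: training a classifier on EEG "oops" (error-related) signals.
# Assumptions: `epochs` holds one EEG window per robot action
# (n_events x n_channels x n_samples) and `labels` marks each event
# as error (1) or correct (0). Data here is random placeholder noise.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
epochs = rng.normal(size=(200, 48, 64))  # placeholder EEG windows
labels = rng.integers(0, 2, size=200)    # placeholder error labels

# Flatten each epoch into a feature vector (a real system would use
# spatial filtering and time-window averaging instead).
X = epochs.reshape(len(epochs), -1)

clf = LinearDiscriminantAnalysis()
print(cross_val_score(clf, X, labels, cv=5).mean())

# At run time, a single new epoch would be scored the same way:
# clf.fit(X, labels); is_error = clf.predict(new_epoch.reshape(1, -1))
```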

The system is still far from perfect; it does not yet even reach 90 percent accuracy when performing in real time. But the researchers seem confident based on the early trials.

The MIT and Boston University researchers also discovered that they could improve the system’s offline performance by focusing on the stronger oops signals that the brain generates when it notices so-called “secondary errors.” These errors came up when the system misclassified the human brain signals, either by falsely detecting an oops signal when the robot was making the correct choice, or by failing to detect the initial oops signal when the robot was making the wrong choice.

By incorporating the oops signals from secondary errors, researchers succeeded in boosting the system’s overall performance by almost 20 percent. The system cannot yet process the oops signals from secondary errors in actual live training sessions with Baxter. But once it can, researchers expect to boost the overall system accuracy beyond 90 percent.
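
In a live session, that secondary signal could in principle be used to overturn the system’s first call. Here is a minimal sketch of that two-stage logic; both functions are hypothetical stand-ins, not the researchers’ actual pipeline.

```python
# Sketch of the two-stage "oops" feedback idea described above.
import random

def detect_errp(eeg_window):
    # Stub classifier: pretend an error-related signal was found
    # whenever a summary score crosses a threshold.
    return eeg_window > 0.5

def classify_with_secondary_check(first_window, second_window):
    first_call = detect_errp(first_window)   # initial oops-signal call
    # If that call was wrong, the brain emits a stronger "secondary
    # error" signal; detecting it lets the system flip its decision.
    if detect_errp(second_window):
        return not first_call
    return first_call

print(classify_with_secondary_check(random.random(), random.random()))
```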

The research also stands out because it showed that people who had never worn EEG caps before could still learn to train Baxter the robot without much trouble. That bodes well for the possibility of humans intuitively relying on EEG to train their future robot cars, robot humanoids, or similar robotic systems.

Saturday, August 12, 2017

How to Follow Fish Using These Amazing Tools

Do you like to swim in the sea? And do you know how researchers follow fish in the deep sea? Check out these tools:

1. Jaw tags: Researchers use these external tags, about the size of a quarter, to identify whether a fish has an internal tracker implanted in it. A jaw tag is usually fitted onto a fish’s lower mandible.

2. Pop-up satellite archival tag: This external tag collects detailed data on fish vitals, location and environmental information, such as light levels.

3. Acoustic transmitter: Internal or external tag that’s ideal for tracking fish in deeper water. These transmitters produce a unique set of pings that get assigned to an individual fish. To collect data, researchers either go out in a vessel to pick up signals or download the information from receivers stationed in the fish’s environment (see the sketch after this list).

4. Cinch tags: This type of external tag indicates the fish bearing it is part of a study and lists contact information for the agency monitoring it. If recreational fishers reel in a catch with a cinch tag, they should report it to the agency listed on the tag.

5. Coded wire tags: These internal tags have a diameter similar to that of mechanical pencil lead. The wire comes on a spool and is lined with imprinted numbers; when a biologist cuts off a piece to make a tag, a unique serial number is paired with a fish. To read this number, researchers need magnification equipment.

6. T-bar tags: External tags that come in a variety of colors. Similar to cinch tags, T-bars flag the fish as part of a study.

7. Radio telemetry tag: Internal tag for tracking fish in shallower waters. Researchers use an antenna, either handheld or secured beneath a plane or boat, to pick up the tag’s radio signal.

8. Visual Implant (VI) tags: Internal tags mixed with brightly colored biocompatible substances that researchers implant into translucent sections of a fish. While it’s possible to spot them with the naked eye, researchers typically need fluorescent light or magnification to see VI tags. Different colors can indicate details such as the year a fish was tagged for study.

9. Hydrostatic tags: Much like cinch tags, these external tags flag the fish as being part of a study.

10. Passive integrated transponder (PIT) tags: An internal tag that biologists must scan to activate. PITs relay data on fish growth rates and movement patterns to a receiver.
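
To illustrate how data from acoustic transmitters (item 3) might be turned into per-fish tracks, here is a minimal sketch. The tag codes, field names, and detection records are all invented for illustration.

```python
# Sketch: matching acoustic-receiver detections back to tagged fish.
# Each transmitter emits a unique code; receivers log (timestamp,
# code) pairs that researchers later download. All data here is
# hypothetical.
from collections import defaultdict

tag_registry = {  # code assigned at tagging time -> fish record
    4711: {"species": "lake trout", "tagged": "2017-05-02"},
    4712: {"species": "walleye", "tagged": "2017-05-03"},
}

detections = [  # downloaded from a stationary receiver
    ("2017-06-01T04:12:09", 4711),
    ("2017-06-01T04:58:44", 4712),
    ("2017-06-02T23:01:17", 4711),
]

tracks = defaultdict(list)
for timestamp, code in detections:
    if code in tag_registry:  # ignore codes from other studies
        tracks[code].append(timestamp)

for code, times in tracks.items():
    fish = tag_registry[code]
    print(f"{fish['species']} (tag {code}): {len(times)} detections")
```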

Thursday, August 10, 2017

Algorithm Can Choose Next Silicon Valley Unicorn

In the world of venture capitalists, not everyone is Peter Thiel. The Silicon Valley investor reaped $1 billion in 2012 when he cashed in his Facebook stock, turning a 2,000 percent profit on his initial $500,000 investment. Stories like Thiel’s may be inspirational, but they are very much outliers. The start-up world sees thousands of hopeful companies pass through each year; only a fraction of them ever return a profit.

Picking a winner, the elusive “unicorn,” is as much a matter of luck as it is hard numbers. Factors like founder experience, workplace dynamics, skill levels and product quality all matter, of course, but there are countless other variables that can spell heartbreak for an aspirational young company. Successful venture capital firms claim to know the secret to success in Silicon Valley, but it can still be a harrowing game to play.

Chasing Unicorns

Humans just aren’t very good at objectively sorting through thousands of seemingly unrelated factors to pick out the subtle trends that mark successful companies. This kind of work, however, is where machine learning programs excel. Two researchers at MIT have developed a custom algorithm aimed at doing exactly that and trained it on a database of 83,000 start-up companies. This allowed them to sift out the factors that were best correlated with success — in this case, a company being acquired or reaching an IPO, both situations that pay off handsomely for investors.


What’s the Secret Recipe?

They found that one of the biggest predictors of success was how start-ups moved through rounds of funding. And it wasn’t the slow and steady companies that were hitting it big; it was the ones that moved most erratically, pausing at one level of funding and then rocketing through the next few. Exactly how this pattern feeds into start-up success isn’t yet well understood, though.
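
A minimal sketch of this style of analysis might look like the following, assuming each company’s funding history has been boiled down to a few round-timing features. The feature set, the synthetic data, and the gradient-boosting classifier are illustrative assumptions, not the researchers’ actual model.

```python
# Sketch: predicting start-up outcomes from funding-round features.
# Features and data are placeholders; the real study trained on
# ~83,000 companies with far richer signals.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([
    rng.integers(1, 8, n),     # number of funding rounds
    rng.exponential(18.0, n),  # mean months between rounds
    rng.exponential(12.0, n),  # spread of months between rounds
                               # (the "erratic movement" signal)
])
y = rng.integers(0, 2, n)      # 1 = acquired or IPO, 0 = neither

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```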

The researchers say that their algorithm could be applied to much more than just nascent tech companies. The same principles that allow it to pick a handful of winners from a crowd of duds should also apply in areas as diverse as the pharmaceutical industry and the movie business, where just a few successes can pay out billions. These are fields where the top players are lionized for their ability to sniff out winners and reap the substantial rewards. As with factory workers, bank tellers and telemarketers, the robots could be coming for their jobs as well.

Saturday, July 29, 2017

Hacking and Doomsday Top Self-Driving Car Fears Online

Silicon Valley tech giants and Detroit automakers have to convince people to trust self-driving cars before they can sell the futuristic technology to customers. That may prove tricky considering the public’s lingering fears and concerns regarding self-driving cars. A recent AI-assisted analysis of more than one trillion social posts revealed that scared-face emoticons related to self-driving cars rose from 30 percent of all emoticons used on the topic to 50 percent by 2016. Top concerns mentioned in social media included fears of self-driving cars being hacked and “robot apocalypse” scenarios of technological change.

It would be silly to interpret “scared face” emoticons and emoji posted online as fully representative of public attitudes toward self-driving cars. But it’s still a useful sign of the public relations challenge facing companies hoping to sell the self-driving car future to the broader public, especially given that about 70 percent of all Americans use some form of social media. The recent social media findings by Crimson Hexagon, an analytics company based in Boston, also generally line up with previous studies on public attitudes toward self-driving cars. Any company that wants to sell a positive image of self-driving cars will almost inevitably have to confront online narratives of nightmare hacking scenarios and doomsday visions.

Crimson Hexagon’s report looked at one trillion social posts from sites such as Twitter, Facebook, Instagram, Reddit, and online forums, as well as some car-specific sites such as Autotrader and Edmunds. The company performed the analysis using machine learning, a fairly common AI technology nowadays, to sift through patterns in words and emoticons within the social media posts. The machine-learning system relied on natural language processing and could also help identify some of the emotional sentiment behind certain words or phrases.
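
As a toy illustration of that kind of sentiment sorting, the sketch below trains a tiny text classifier on invented posts. The labels, bag-of-words features, and naive Bayes model are stand-ins, not Crimson Hexagon’s actual methodology.

```python
# Toy sketch: sorting social posts about self-driving cars by
# sentiment with bag-of-words features. Training data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_posts = [
    "self-driving cars will get hacked, no thanks",
    "robot apocalypse incoming, keep me away from these things",
    "autonomous cars could end traffic jams, amazing",
    "incredible engineering, can't wait to ride in one",
]
train_labels = ["fear", "fear", "positive", "positive"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_posts, train_labels)

print(model.predict(["what if someone hacks my car remotely"]))
```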

Concerns about self-driving cars being vulnerable to hacker attacks seemed fairly prominent with 18,000 mentions. These self-driving car fears were often driven by mainstream news reports that discussed hacking vulnerabilities and self-driving car safety. But doomsday fears of technological change that revolved around “apocalypse, doomsday and the destruction of humanity” came in close behind with 17,000 mentions.

The online talk surrounding self-driving cars was not all doom and gloom. About 6,000 social posts focused on the positive side of self-driving cars as a “technological revolution that harnesses big data and machine learning.” Another 7,000 social posts discussed self-driving cars as a possible solution to traffic jams and highway congestion, even as they also featured angry venting. And 4,000 social posts talked up the innovation behind self-driving cars and expressed awe at the entrepreneurs and engineers developing such technologies.

Turning Anything into a Touchscreen With 'Electrick'

Buttons, who needs ’em?


A new proof-of-concept technology from Carnegie Mellon University turns everyday objects into touch interfaces with an array of electrodes. Walls, guitars, toys and steering wheels come alive with touch sensitivity in the researchers’ video, and it seems that the possibilities are pretty much endless. What could be next? Grocery store aisles? Whole buildings? Other people? Cell phones?

The design is called Electrick, and it comes from Carnegie Mellon’s Future Interfaces Group. It takes advantage of the same principle as your smartphone screen: because our skin is conductive, when we touch a surface with electricity running through it, we alter the electric field in a predictable way. By coating objects with electrically conductive materials and surrounding them with electrodes, the team can triangulate the position of a finger based on fluctuations in the field. Combined with a microprocessor, they can train their program to translate swipes and taps into commands.
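
As a rough illustration of the triangulation idea, here is a minimal sketch that estimates a touch position from per-electrode signal changes using a weighted centroid. The electrode layout, the readings, and the centroid math are simplifications for illustration, not the published Electrick algorithm, which uses electric field tomography.

```python
# Sketch: locating a touch from per-electrode signal drops.
# A finger shunts current, so electrodes nearer the touch see a
# larger change; a weighted centroid gives a crude position.
# Electrode layout and readings below are invented.
import numpy as np

electrode_xy = np.array([  # electrode positions around the surface
    [0, 0], [10, 0], [20, 0],
    [0, 10], [20, 10],
    [0, 20], [10, 20], [20, 20],
], dtype=float)

signal_drop = np.array(  # measured field change at each electrode
    [0.05, 0.30, 0.10, 0.08, 0.40, 0.02, 0.12, 0.25]
)

weights = signal_drop / signal_drop.sum()
touch_estimate = weights @ electrode_xy
print("estimated touch position:", touch_estimate)
```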

They experimented with a few different application methods. Vacuum forming works for simple shapes, while a spray-on version coats even irregular objects, such as a guitar or a miniature Yoda head. Materials can also be custom molded or 3-D printed, and it appears that Electrick even works with Play-Doh and Jell-O.

Some of the more practical applications include prototyping controller designs and modifying laptops and surfaces to run programs with a single touch, but the sky is really the limit here. Turn on your lights with the refrigerator. Play Halo with your coffee table. Change the channel with your cat (maybe not). You can imagine a future where any surface is a potential control device — and the attendant embarrassment when sitting down in the wrong place causes the blender to erupt.

Their system is low-cost and widely applicable, they say, and the only downside at the moment is that the presence of an electromagnetic field from other powered objects nearby can interfere with the accuracy of the system. They are currently working on ways to get around that.

Panther Drone Delivers Packages by Air and Land

A four-wheeled drone’s first aerial package delivery test showed off a special touch by also driving up to the doorstep of its pretend customer. That capability to deliver by both air and land makes the Panther drone an unusual competitor in the crowded drone delivery space. But the drone’s limited delivery range may pose a challenge in competing against the delivery drones of Google and Amazon.

Unlike most delivery drones designed purely for flight, the Panther drone resembles a four-wheeled robot with six rotors extending out from its sides. That design leverages the earlier “flying car” development efforts of Advanced Tactics Inc., a company based in Southern California. Previously, Advanced Tactics spent time developing its “Black Knight Transformer” flying car with U.S. military missions in mind. The Panther drone appears to be a miniaturized version of the larger Transformer, with commercial drone delivery as one of several possible new roles.

“The Panther can fly at over 70 mph and has a flight time with a five-pound package of well over six minutes,” says Don Shaw, CEO of Advanced Tactics Inc. “With a two-pound package and larger battery, it can fly well over nine minutes.”
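
Taken at face value, those figures imply a fairly short delivery radius. Here is a back-of-the-envelope estimate from the quoted numbers, assuming the drone cruises at top speed for the entire flight, which real-world conditions would not allow.

```python
# Back-of-the-envelope delivery radius from the quoted figures.
# Assumes constant 70 mph cruise for the full flight time and a
# round trip back to base; actual performance would differ.
speed_mph = 70
flight_time_min = 6          # with a five-pound package

one_way_miles = speed_mph * (flight_time_min / 60)
round_trip_radius = one_way_miles / 2

print(f"max one-way distance: {one_way_miles:.1f} miles")
print(f"round-trip delivery radius: {round_trip_radius:.1f} miles")
# -> about 7 miles one-way, or a ~3.5-mile radius with a return leg
```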


Panther Drone Tradeoffs

The good news for the Panther’s delivery drone aspirations is that its versatility could make it easier to deliver packages. Delivery drones will eventually face the challenge of navigating neighborhoods with obstacles such as trees and power lines. In addition, drone developers must figure out how the drones will safely deliver packages into the hands of customers without risking any drone-on-human accidents.

Some delivery drone efforts such as Google’s Project Wing have attempted workaround solutions such as lowering burrito deliveries to the ground with a cable. By comparison, the Panther could simply land in any open area—such as on a local road—and then drive to the doorstep of customers. It could even drive inside the doorways of businesses or access warehouses through their loading bays.

But air and ground versatility may have come at the cost of delivery range. That is because the ground mobility drivetrain adds extra weight that the Panther drone must expend battery power on lifting whenever it flies through the air. A future version of the Panther drone with a robotic arm to handle packages could potentially be heavier and shorten the delivery range even more. (On the other hand, Shaw pointed out that the drone can drive for hours on the ground at up to five miles per hour.)

The Electric Lilium Jet and the Future of Air Taxis

The old science fiction fantasy of a flying car that both drives on the ground and flies in the air is unlikely to revolutionize daily commutes. Instead, Silicon Valley tech entrepreneurs and aerospace companies dream of electric-powered aircraft that can take off vertically like helicopters but have the flight efficiency of airplanes. The German startup Lilium took a very public step forward in that direction by demonstrating the first electric-powered jet capable of vertical takeoff and landing last week.

The Lilium Jet prototype that made its maiden flight resembles a flattened pod with stubby thrusters in front and a longer wing with engines in the back. The final design concept shows two wings holding a combined 36 electric turbofan engines that can tilt to provide both vertical lifting thrust and horizontal thrust for forward flight. Such electric engines powered by lithium-ion batteries could enable a quieter breed of aircraft that could someday cut travel times for ride-hailing commuters from hours to minutes in cities such as San Francisco or New York. On its website, Lilium promises an air taxi that could eventually carry up to five people at speeds of 190 miles per hour, about the same speed as a Formula One racing car. And the company promises that passengers could begin booking the Lilium Jet as part of an air taxi service by 2025.

“From a technology point of view, there is not a challenge that cannot be solved,” says Patrick Nathen, a cofounder and head of calculation and design for Lilium. “The biggest challenge right now is to build the company as fast as possible in order to catch that timeline.”

Nathen and his cofounders met just three and a half years ago. But within that short time, they put together a small team and began proving their dream of an electric jet capable of vertical takeoff and landing (VTOL). Lilium began with seed funding from a tech incubator under the European Space Agency but has since attracted financial backing from private investors and venture capital firms.

Getting Lilium off the ground probably would not have been possible just five years ago, Nathen says. But the team took full advantage of the recent technological changes that have lowered the price on both materials—such as electric circuits and motors—and manufacturing processes such as 3D printing. Lower costs enabled Lilium to quickly and cheaply begin assembling prototypes to prove that their computer simulations could really deliver on the idea of an electric VTOL jet.