Yeast’s Latest Trick: Detecting Deadly Pathogens

Yeast, the ubiquitous little fungus that can seemingly do it all, is doing more.

If you aren’t familiar with yeast’s accolades, here’s a refresher: It gives beer its buzz; it can produce textiles, safer opioids and tasty food; and it’s the workhorse model organism in scientific labs around the world. Now, researchers have put yeast to work detecting deadly, pervasive fungal pathogens.


A team of researchers led by Columbia University’s Virginia Cornish designed an elegantly simple biosensor—it’s a dipstick—made from genetically modified Saccharomyces cerevisiae, or baker’s yeast, that signals the presence of fungal pathogens responsible for diseases in humans and agriculture. And unlike so many clinical advances with ambiguous timelines for real-world implementation, the researchers say their biosensor, which costs less than a penny, can be deployed around the world right now.

“While at an early stage of implementation, these biosensors can be immediately adopted in the clinic to shorten the time required for diagnosis of fungal pathogens from blood cultures,” researchers wrote in their study, which was published Wednesday in the journal Science Advances.

Fungal pathogens are estimated to cause 2 million deaths annually around the world, in addition to ravaging staple food crops. These pathogens, often detected far too late, tend to have an outsize impact on people living in impoverished, low-resource parts of the planet that lack well-equipped clinical and industrial labs. A cheap biosensor would be a boon to global pathogen surveillance efforts and would undoubtedly save many lives.

How It Works

Cornish and her students replaced cell surface protein receptors on baker’s yeast with receptor proteins specific to a targeted pathogen. In their first trial, researchers swapped in pheromone receptors from Candida albicans, a fungus that occurs naturally in the gut but can cause deadly infections if it runs amok.

Then, Cornish’s team further engineered the yeast to produce lycopene when its new receptors detected C. albicans pheromones in a sample. Lycopene is the same compound that gives tomatoes their red color, so when a pathogen was present, the biosensors turned red. The team found equal success applying the same technique to detect 10 more pathogens.

The accuracy of their yeast biosensor went toe-to-toe with current techniques that rely on expensive equipment and trained personnel, and the biosensors were just as effective at detecting pathogens in water, blood, dirt and urine. What’s more, they can be mass-produced using existing yeast culture technologies, and they remain effective after 38 weeks on the shelf.

Company Claims It Will Deliver Lab-Grown Meat by 2018

Patties of beef grown in a lab could be hitting supermarket shelves as early as 2018.

That’s the bold statement from Hampton Creek, a San Francisco-based food company that produces mainly vegan condiments and cookie doughs. As the Wall Street Journal reports, the company says it is working on growing cultured animal cells in the lab to turn into cruelty-free meat products, and that the first could be ready as early as next year. If the rocky history of lab-grown meats is anything to judge by, however, the company has a difficult road ahead of it.


Old Idea, New Tactics

The idea of lab-grown meats dates back decades, and the actual process of coaxing muscle cells to grow in the lab has been possible since the 1970s. The prospect of actually bringing these artificial meats to the table resurfaced in 2006, when Vladimir Mironov, then at the Medical University of South Carolina, proposed plans for a coffee maker-like machine that would brew up personalized burgers and steaks from cell cultures and growth medium overnight. That project eventually foundered, as we reported this year, but the lure of lab-grown meat remains.

Mark Post, a physiologist at Maastricht University in the Netherlands, unveiled the first actual lab-grown burger in 2013 at a glitzy event in London. It cost $325,000 to produce (although he says costs have since come down), and, according to the tasters, was a bit on the bland side. Post has since formed a company, Mosa Meat, to refine the technology needed to bring costs down, and other groups, such as Memphis Meats, are pursuing a similar goal.

No Easy Task

The challenges they face are multifaceted. The most pressing concern at the moment is scale — while it’s been shown to be possible to grow a hamburger in the lab, that doesn’t mean we’re anywhere near producing them by the millions. It currently takes massive amounts of cultured tissue to produce even one patty, meaning that both the physical space and cost requirements far outweigh the returns of growing meat in a lab at the moment. Artificial meat also requires a scaffold to grow on, a structure that will ideally be edible and must be stretched, or “exercised,” periodically to stimulate growth. And the lab beef grown so far can’t even claim to be cruelty-free yet, as it requires fetal calf serum for sustenance.


Once those issues have been resolved, consumers will need to be convinced both that lab-grown meats are safe, and that they taste as good as the real thing.

Computer Algorithm Can Spot Drunk Tweeters

Drunk tweets, long considered an unfortunate yet ubiquitous byproduct of the social media age, have finally been put to good use.

With the help of a machine-learning algorithm, researchers from the University of Rochester cross-referenced tweets mentioning alcohol consumption with geo-tagging information to broadly analyze human drinking behavior. They were able to estimate where and when people imbibed, and, to a limited extent, how they behaved under the influence. The experiment is more than a social critique — the algorithm helps researchers spot drinking patterns that could inform public health decisions and could be applied to a range of other human behaviors.

#Drunk Tweeting

To begin with, the researchers sorted through a selection of tweets from both New York City and rural New York with the help of Amazon’s Mechanical Turk. Users identified tweets related to drinking and picked out keywords, such as “drunk,” “vodka” and “get wasted,” to train an algorithm.

They put each relevant tweet through a series of increasingly stringent questions to home in on tweets that not only referenced the author drinking but indicated that they were doing so while sending the tweet. That way, they could determine whether a person was actually tweeting and drinking, or just sending tweets about drinking. Once they had built up a dependable database of keywords, they were able to fine-tune their algorithm so it could recognize words and locations that likely proved people were drinking.
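
To make that pipeline concrete, here is a minimal sketch in Python of this kind of keyword-driven tweet classifier. Everything in it, from the toy tweets and labels to the bag-of-words model, is an illustrative assumption rather than the Rochester team’s actual code.

```python
# A minimal sketch of a drinking-tweet classifier: crowd-labeled tweets
# train a bag-of-words model that flags new tweets. The data and model
# choice are illustrative assumptions, not the researchers' pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical Mechanical Turk output: 1 = author is drinking while
# tweeting, 0 = merely mentions drinking (or nothing at all).
tweets = [
    "so drunk right now lol",
    "about to get wasted with the crew",
    "this vodka was a mistake",
    "my friend got drunk last weekend",
    "watching a documentary about vodka smugglers",
    "heading to work, long day ahead",
]
labels = [1, 1, 1, 0, 0, 0]

# Unigrams and bigrams stand in for the keyword lists ("drunk",
# "get wasted") that the Turk workers picked out.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(tweets, labels)

print(model.predict(["get wasted tonight"]))                 # likely [1]
print(model.predict(["reading about drunk driving laws"]))   # ideally [0]
```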

To get tweeters’ locations, they used only tweets that had been geo-tagged with Twitter’s “check-in” feature. They then approximated users’ home locations by checking where they were when they sent tweets in the evenings, in addition to tweets containing words like “home” or “bed.” This let them know whether users preferred to drink at home or out at bars and restaurants.
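
The home-location heuristic can likewise be sketched in a few lines, assuming simplified tweet records with an hour, coordinates and text. The field names, evening cutoff and block-level rounding below are my assumptions for illustration, not values from the paper.

```python
# Sketch of the home-location heuristic: "home" is the spot a user
# tweets from most often in the evening, or when the tweet mentions
# words like "home" or "bed". Fields and thresholds are assumptions.
from collections import Counter

def estimate_home(tweets):
    """tweets: list of dicts with 'hour' (0-23), 'lat', 'lon', 'text'."""
    votes = Counter()
    for t in tweets:
        evening = t["hour"] >= 19 or t["hour"] < 6
        mentions_home = any(w in t["text"].lower() for w in ("home", "bed"))
        if evening or mentions_home:
            # Round coordinates to roughly block-level precision.
            votes[(round(t["lat"], 3), round(t["lon"], 3))] += 1
    return votes.most_common(1)[0][0] if votes else None

sample = [
    {"hour": 22, "lat": 40.7301, "lon": -73.9866, "text": "finally in bed"},
    {"hour": 21, "lat": 40.7302, "lon": -73.9867, "text": "home at last"},
    {"hour": 13, "lat": 40.7580, "lon": -73.9855, "text": "lunch in midtown"},
]
print(estimate_home(sample))  # (40.73, -73.987)
```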

Combining these two datasets gave the researchers a broad idea of how many people in a given area or at a given time were drinking. Not surprisingly, they found a correlation between the number of bars and how much people drank — more bars meant more drunk people. New York City saw a stronger correlation between the two, suggesting that people in the big city really do like to drink more. Rather paradoxically, their data also showed that city dwellers were more likely to tweet about drinking at home.

Their work builds on previous studies that attempted to tie people’s tweets to specific activities and locations. By using the check-in feature, they say that their system is much more accurate than others, and can reliably place people within a block of their actual location. They published their work on the pre-print server arXiv.

How to Train Your Robot With Brain Oops Signals

Baxter the robot can tell the difference between right and wrong actions without its human handlers ever consciously giving a command or even speaking a word. The robot’s learning success relies upon a system that interprets the human brain’s “oops” signals to let Baxter know when a mistake has been made.

The new twist on training robots comes from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University. Researchers have long known that the human brain generates certain error-related signals when it notices a mistake. They created machine-learning software that can recognize and classify those brain oops signals from individual human volunteers within 10 to 30 milliseconds—a way of creating instant feedback for Baxter the robot when it sorted paint cans and wire spools into two different bins in front of the humans.

The human volunteers wore electroencephalography (EEG) caps that detected those oops signals whenever the wearer saw Baxter the robot make a mistake. Each volunteer first underwent a short training session in which the machine-learning software learned to recognize their brain’s specific “oops” signals. Once that was completed, the system could start giving Baxter instant feedback on whether each human handler approved or disapproved of the robot’s actions.
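
In code, that calibrate-then-classify loop might look roughly like the sketch below. The random arrays stand in for real EEG epochs, and the feature layout and classifier choice are assumptions made for illustration, not the MIT and Boston University team’s published pipeline.

```python
# Sketch of per-user "oops"-signal classification: a short calibration
# session trains a classifier, which then maps new EEG epochs to instant
# feedback for the robot. Shapes, data and model are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Hypothetical calibration data: 100 epochs, each a flattened window of
# 48 channels x 50 samples, labeled 1 when the volunteer saw an error.
X_train = rng.normal(size=(100, 48 * 50))
y_train = rng.integers(0, 2, size=100)

clf = LinearDiscriminantAnalysis()  # common for EEG; an assumption here
clf.fit(X_train, y_train)

def feedback(epoch):
    """Turn one new EEG epoch into a reward signal for the robot."""
    oops = clf.predict(epoch.reshape(1, -1))[0]
    return "wrong bin" if oops else "ok"

print(feedback(rng.normal(size=48 * 50)))
```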

It’s still far from a perfect system, or even one that reaches 90 percent accuracy in real time. But the researchers seem confident based on the early trials.

The MIT and Boston University researchers also discovered that they could improve the system’s offline performance by focusing on stronger oops signals that the brain generates when it notices so-called “secondary errors.” These errors came up when the system misclassified the human brain signals by either falsely detecting an oops signal when the robot was making the correct choice, or when the system failed to detect the initial oops signal when the robot was making the wrong choice.

By incorporating the oops signals from secondary errors, researchers succeeded in boosting the system’s overall performance by almost 20 percent. The system cannot yet process the oops signals from secondary errors in actual live training sessions with Baxter. But once it can, researchers expect to boost the overall system accuracy beyond 90 percent.
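
The role of secondary errors can be captured in a few lines of logic: a secondary oops signal implies the system’s first classification was itself wrong, so the earlier decision should flip. This reading is inferred from the description above, not taken from the researchers’ code.

```python
# Sketch of folding secondary-error signals into the feedback loop.
# If a second "oops" follows the system's own feedback, the first
# classification was wrong, so invert it. Inferred logic, an assumption.
def final_feedback(primary_oops: bool, secondary_oops: bool) -> bool:
    """Return True if the robot's action should count as a mistake."""
    return not primary_oops if secondary_oops else primary_oops

# Missed the initial oops, but a secondary oops reveals the miss:
assert final_feedback(primary_oops=False, secondary_oops=True) is True
# Correctly caught the error the first time:
assert final_feedback(primary_oops=True, secondary_oops=False) is True
```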

The research also stands out because it showed how people who had never tried the EEG caps before could still learn to train Baxter the robot without much trouble. That bodes well for the possibility of humans intuitively relying on EEG to train their future robot cars, humanoid robots or similar robotic systems.

How to Follow Fish Using Tracking Tags

Ever wondered how researchers follow fish, even in the deep sea? Here’s a rundown of the tagging tools they use.

1. Jaw tags: Researchers use these external tags, about the size of a quarter, to identify whether a fish has an internal tracker implanted in it. A jaw tag is usually fitted onto a fish’s lower mandible.

2. Pop-up satellite archival tag: This external tag collects detailed data on fish vitals, location and environmental information, such as light levels.

3. Acoustic transmitter: Internal or external tag that’s ideal for tracking fish in deeper water. These transmitters produce a unique set of pings that get assigned to an individual fish. To collect data, researchers either go out in a vessel to pick up signals or download the information from receivers stationed in the fish’s environment.

4. Cinch tags: This type of external tag indicates the fish bearing it is part of a study and lists contact information for the agency monitoring it. If recreational fishers reel in a catch with a cinch tag, they should report it to the agency listed on the tag.

5. Coded wire tags: Unlike the large-scale model shown here, these internal tags have a true diameter similar to that of mechanical pencil lead. The wire comes on a spool lined with imprinted numbers; when a biologist cuts off a piece to make a tag, a unique serial number is paired with the fish. Researchers need magnification equipment to read it.

6. T-bar tags: External tags that come in a variety of colors. Similar to cinch tags, T-bars flag the fish as part of a study.

7. Radio telemetry tag: Internal tag for tracking fish in shallower waters. Researchers use an antenna, either handheld or secured beneath a plane or boat, to pick up the tag’s radio signal.

8. Visual Implant (VI) tags: Internal tags mixed with brightly colored biocompatible substances that researchers implant into translucent sections of a fish. While it’s possible to spot them with the naked eye, researchers typically need fluorescent light or magnification to see VI tags. Different colors can indicate details such as the year a fish was tagged for study.

9. Hydrostatic tags: Much like cinch tags, these external tags flag the fish as being part of a study.

10. Passive integrated transponder (PIT) tags: An internal tag that biologists must scan to activate. PITs relay data on fish growth rates and movement patterns to a receiver.

Algorithm Can Choose the Next Silicon Valley Unicorn


In the world of venture capitalists, not everyone is Peter Thiel. The Silicon Valley investor reaped $1 billion in 2012 when he cashed in his Facebook stock, turning a 2,000 percent profit on his initial $500,000 investment. Stories like Thiel’s may be inspirational, but they are far and away the outliers. The start-up world sees thousands of hopeful companies pass through each year; only a fraction of them ever returns a profit.



Picking a winner, the elusive “unicorn,” is as much a matter of luck as it is hard numbers. Factors like founder experience, workplace dynamics, skill levels and product quality all matter, of course, but there are countless other variables that can spell heartbreak for an aspiring young company. Successful venture capital firms claim to know the secret to success in Silicon Valley, but it can still be a harrowing game to play.

Chasing Unicorns

Humans just aren’t very good at objectively sorting through thousands of seemingly unrelated factors to pick out the subtle trends that mark successful companies. This kind of work, however, is where machine learning programs excel. Two researchers at MIT have developed a custom algorithm aimed at doing exactly that and trained it on a database of 83,000 start-up companies. This allowed them to sift out the factors that were best correlated with success — in this case, a company being acquired or reaching an IPO, both situations that pay off handsomely for investors.
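
In outline, that setup is an ordinary supervised-learning problem, sketched below on synthetic data. The features, model and success rate are stand-ins of my own; the MIT researchers’ actual algorithm and 83,000-company dataset are not reproduced here.

```python
# Sketch of the approach: train a classifier on historical start-up
# records where "success" means acquisition or IPO. All data here is
# synthetic and the features are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000

# Toy stand-ins for features such a model might see.
X = np.column_stack([
    rng.integers(1, 8, n),      # number of funding rounds
    rng.lognormal(15, 1, n),    # total raised, in dollars
    rng.integers(1, 5, n),      # number of founders
    rng.exponential(400, n),    # mean days between rounds
])
y = rng.random(n) < 0.1         # roughly 10% acquired or reached an IPO

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```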


What’s the Secret Recipe?

They found that one of the biggest predictors of success was how start-ups moved through rounds of funding. And it wasn’t the slow and steady companies that were hitting it big; it was the ones that moved most erratically, pausing at one level of funding and then rocketing through the next few. Why this pattern tracks with start-up success isn’t completely understood at the moment, though.
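
One illustrative way to turn that observation into a number a model can use is to measure the spread of waiting times between a company’s funding rounds; high variance means long stalls followed by rapid-fire rounds. The metric below is my construction, not the researchers’ published feature.

```python
# Hypothetical "erratic funding" feature: the spread of gaps between
# consecutive funding rounds, in days. An illustrative metric only.
import statistics

def funding_erraticness(round_days):
    """Population std. dev. of gaps between consecutive rounds."""
    gaps = [b - a for a, b in zip(round_days, round_days[1:])]
    return statistics.pstdev(gaps) if len(gaps) > 1 else 0.0

steady = [0, 365, 730, 1095]    # one round every year
erratic = [0, 900, 960, 1020]   # long stall, then rapid-fire rounds
print(funding_erraticness(steady))   # 0.0
print(funding_erraticness(erratic))  # about 396
```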

The researchers say that their algorithm could be applied to much more than just nascent tech companies. The same principles that allow it to pick a handful of winners from a crowd of duds should also apply in areas as diverse as the pharmaceutical industry and the movie business, where just a few successes can pay out billions. These are fields where the top players are lionized for their ability to sniff out winners and reap the substantial rewards. As with factory workers, bank tellers and telemarketers, the robots could be coming for their jobs as well.