Saturday, September 5, 2015

[tt] NYT (Digital Burrito): Backpack Makers Rethink a Student Staple

This digital burrito looks great.

Backpack Makers Rethink a Student Staple


The inside of Alejandro Sarete's backpack is jammed with the objects
of a busy student life: smartphone, USB thumb drive, playing cards,
lip balm. Cho Young-Uk's shoulder bag is more minimalist in content:
Lenovo laptop and adapter.

Mr. Sarete and Mr. Cho, both students at New York University, have
something missing from their stashes: piles of textbooks.

"I don't really have to carry around textbooks anymore, like I used
to in high school," said Mr. Cho, a sophomore. All but two of his
classes--Spanish history and financial accounting--had moved the
coursework online.

"I think fewer people have them, for sure," Mr. Sarete said. "I
actually still like physical paper, but I'm an exception."

As students increasingly go back to school with gadgets instead of
textbooks, and no longer need huge backpacks to haul them around,
backpack makers in the $2.7 billion industry are rethinking not only
the perennial style of back-to-school packs but also the mission of
the ubiquitous carrying gear that for decades has been an annual
must-buy for students of all ages.

For clues, Eric Rothenhaus and his team at VF Corporation, the
apparel giant that owns a leading maker of backpacks, JanSport,
sought the advice of some extreme backpack users. They studied
mountaineers whose lives can depend on their gear. They talked to
the homeless in San Francisco, who live out of shopping carts.

And they visited campuses to observe the habits and habitats of
college students, who buy many of the eight million backpacks
JanSport sells each year. "We realized we needed to forget
everything we knew about the category," said Mr. Rothenhaus,
director of research and design at JanSport.

"We went out to the streets of New York, to San Francisco, to
college campuses, and we started to ask: What are the things we
carry with us? How do we carry them? And how is that changing?"

Americans bought more backpacks than ever last year--174 million of
them, according to the Travel Goods Association. The bulk of these
were purchased during back-to-school shopping season, typically the
second-largest sales season for retailers and an important
bellwether for year-end holiday sales.

But there is ample hand-wringing within the industry that it is not
keeping up with the times. After growing at a fast clip over the
last decade, as offices grew more casual and men increasingly
switched from briefcases to backpacks, the market for backpack sales
in the United States is expected to grow just 3.9 percent this year,
according to data from Euromonitor International. That is down from
9 percent five years ago.

"The market for backpacks is becoming saturated and is nearing its
peak," Ayako Homma, a Euromonitor research analyst, wrote in an
email. Consumers, she said, are "looking for something new and exciting."

Those concerns add to pessimism over the entire late-summer shopping
season. Consumers will spend about 6 percent less on back-to-school
purchases compared with a year ago, the National Retail Federation
predicts, in part because there are few new "must have" electronics
so far this year.

JanSport has come up with a wrap-style pouch for cords and adapters
that it calls a Digital Burrito. Credit Jim Wilson/The New York Times

In backpacks, too, experts say there is a dearth of hits. They say
innovation has stalled in a market dominated by VF, which also owns
the Eastpak, Timberland and North Face brands and controls 55
percent of backpack sales in the United States. Many packs on
students' backs as they go back to school this week are largely
indistinguishable from those their parents carried.

"I think there's room in the market for something new," said Lindsey
Shirley, a clothing and textiles expert at Utah State University who
is developing a new degree in outdoor product design to address a
perceived shortfall of fresh talent in the field. "There's
definitely room for innovation."

Some of that innovation has come from newer start-ups, aimed at
specific groups of consumers. There is the $170 Tylt Energi+ with a
built-in battery charger and cables that charge up to three devices
at a time. There is the $1,450 Black Diamond JetForce Pack, fitted
with a fan-driven airbag system to increase the chances of survival
in an avalanche.

At the other end of the spectrum are retro-look backpacks from an
upstart, Herschel Supply, which pair mountaineering straps with
laptop pockets and have been a rare recent hit.

John Sears, vice president for design and development at the
backpack maker Gregory Mountain Products, said that today's digital
lifestyles were an opportunity, not a threat, because people were
hauling around so much digital gear. Gregory will soon sell new
backpacks with easily removable solar panels, designed for multiday
adventures in the back country.

"People want to stay more connected socially, even in the outdoors,"
he said. "Even if they're in the back country, they're using solar
panels, GoPros, GPS."

JanSport, which first started selling backpacks nearly a
half-century ago in Seattle, has taken a page out of the start-up
playbook, teaming up with Ideo, a design firm and Silicon Valley
darling, to reimagine the backpack. The extreme mountaineers they
interviewed demonstrated how they kept their most important gear at
the top of the bag--lights, food, beacons--all sealed in Ziploc
bags and away from damaging moisture.

The team found similar strategies among the homeless on the streets
of San Francisco. The top layer of their shopping carts was lined
with dispensable knickknacks. But they stored their valuable
possessions, like money and food, in backpacks, easily accessible.

What interested Mr. Rothenhaus was that both groups developed
meticulous and personalized packing strategies that they honed over
time. They also valued the same backpack qualities: that they be
water-resistant and have areas that were quickly accessible, yet be
simple enough to meet a range of needs and packing methods.

"Packing--everybody has a system and methodology for it," Mr.
Rothenhaus said. "Our research steered us away from trying to design
pockets and compartments for specific uses.
We didn't want to overengineer. We wanted to give people options."

The team then looked at their findings through the lens of average
users like college students, for whom smartphones, not beacons, are
survival tools. Many of their needs were similar. Water-resistance,
it turned out, was as important to heavy users of smartphones as it
was to mountaineers.

They also wanted flexibility, but they needed a little help with
organization, Mr. Rothenhaus's team realized, as they watched
students pull chargers and cords from their bags in a jumbled mess.
("I have a very cluttered mind," Mr. Sarete, the N.Y.U. student, said.)

JanSport created an easily accessible "V-loft" pocket that sits on
top of the bag, big enough for a phone, small tablet and other devices.

The brand's rethinking inspired a new line of wrap-style pouches--
which JanSport calls Digital Burritos--that help users gather up
the cords and adapters. Designers waterproofed compartments to let
users throw a burrito or soda or gym shoes into their backpacks,
without worrying about their tablets or laptops.

And JanSport homed in on some promising signs. Though students might
not be carrying textbooks, their lives were lived increasingly on
the go, their mobility helped by smartphones and Wi-Fi. JanSport
will focus on tailoring its bags to lifestyles lived shuttling among
what the team calls "third spaces"--the cafe tables or park
benches that become impromptu work spaces--and studying how
backpacks might fit in.

"We used to get up in the morning and go somewhere, go from A to B,
then back to A. But now our spaces are little circles," Mr.
Rothenhaus said. "We might be outside in a park, working. We might
be eating at food trucks."

"When you need to be on the go," he said, "you need a backpack."
tt mailing list

[tt] NYT: Choosing the Best Smartphone Plan for You

What percent of readers of this list live in the United States?

Choosing the Best Smartphone Plan for You


THE American wireless industry is increasingly redefining the word
"simple" in the same way that the food industry rendered the word
"natural" absurd.

Consider that when you pick up "natural" pancake syrup from the
grocery store, chances are that one of the listed ingredients will
be "natural flavoring"--an oxymoron.

Similarly, when you shop for a wireless phone plan today, chances
are that the carrier's marketing contains the word "simple." But
when you browse the numerous options and fees, you'll find they are
anything but. Instead, we may be hitting peak complexity with phone plans.
"Never before has the pricing been so complicated with all the
carriers," said Toni Toikka, whose research firm Alekstra analyzes
wireless bills and who recently created a giant spreadsheet of phone
plans. "This is the first time in carrier history that the carriers
have been able to build that kind of a maze that even I get really confused."
So we worked to boil down which plans are worth paying attention to,
especially as consumers may be shopping for new phones and carrier
plans after Apple unveils new iPhones on Sept. 9. The main takeaway:
In terms of price and network quality, AT&T has the best deals right
now for both individual subscribers and families.

To see how we arrived at that conclusion, it's worth understanding
how once relatively straightforward phone plans became complicated
ones with all manner of fees and choices.

Death of 2-Year Contract

The big change in wireless plans began with the departure from the
two-year contract, which was a standard plan several years ago.
Under the contract, people paid a one-time subsidized price for a
cellphone upfront, and then a monthly fee for a wireless plan,
including minutes, text messages and data. After two years, you
would be eligible to buy a new phone at a subsidized rate and you
would continue to pay the same monthly wireless fee, whether you
upgraded to a new phone or not.

In 2013, T-Mobile said it would abandon the two-year contract to
make phone plans cheaper and more transparent, while eliminating
annoying charges like termination fees. T-Mobile instead chose to
break out the full cost of a phone from the cost of the data plan.
AT&T, Verizon Wireless and Sprint soon followed suit.

"We're finally seeing companies move away from these one-way,
unfair, nontransparent contracts," Mike Sievert, the chief operating
officer of T-Mobile, said in an interview.

The new plans are called contract-free or "equipment installment
plans," which typically have four main costs: the price of the data
plan, the cost of the device spread over monthly installments, the
activation fee for each phone and the monthly cost for each phone
line (also called the network-access fee).

Prices vary by carrier, and over time, the fees can shift. Once you
finish paying off the cost of the phone, for example, that fee is
removed from your bill and you pay a lower monthly rate for your
phone bill. In some cases, you can choose to upgrade to a new device
when you've paid off the phone.

For a flavor of the complexity of these plans, look at Verizon's
three-gigabyte contract-free plan. It costs $45 a month for the
data. Each smartphone added to the plan is $20 a month. There is an
equipment payment plan to pay off the smartphone over two years--
the iPhone 6 costs $650, or $27 a month spread over two years.
Altogether, you pay at least $92 a month. After paying off the
device, you would subtract the $27 equipment charge from your
monthly bill.
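The arithmetic behind the Verizon example above can be sketched in a few lines. This is only a sanity check using the article's example figures, not current carrier pricing, and it ignores taxes and surcharges:

```python
# Verizon three-gigabyte contract-free plan, per the article's example.
data_plan = 45.00        # monthly cost of the 3 GB data plan
line_access = 20.00      # monthly access fee per smartphone
device_total = 650.00    # full price of the iPhone 6
months = 24              # equipment installment period (two years)

device_monthly = device_total / months              # roughly $27 a month
monthly_bill = round(data_plan + line_access + device_monthly, 2)
print(monthly_bill)      # about $92, matching the article

# Once the phone is paid off, the equipment charge drops away:
after_payoff = data_plan + line_access
print(after_payoff)      # $65 a month
```

The same pattern (data plan + per-line access fee + device installment) applies to the other carriers' contract-free plans, with different figures plugged in.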

Contract-Free Plans

One aspect of contract-free plans that consumers should be aware of:
Some can actually cost more than a two-year contract plan.

Take an individual AT&T customer who has an iPhone 6 with five
gigabytes of data on a two-year contract. Over the two years, the
customer would pay about $2,405 total ($200 upfront for the iPhone,
$50 a month for the data plan, a $45 activation fee and $40 a month
for the network access fee).

For someone who chooses the five-gigabyte plan on AT&T's Next 18
installment plan, which spreads phone payments over two years, the
customer would pay $2,465 over two years ($27.09 a month for the
iPhone, a $15 activation fee, $25 a month for the network access fee
and $50 a month for the data plan).

The difference in price comes from the monthly rates and activation
fees. The monthly rate for the installment plan is higher than the
monthly rate for the two-year contract, whereas the activation fee
is higher on the two-year contract than the contract-free plan.
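The two AT&T totals above can be reproduced directly. A rough check, again using the article's example figures (not current pricing) and ignoring taxes:

```python
months = 24  # both plans run two years

# Two-year contract: subsidized phone upfront, higher activation fee,
# higher monthly access fee.
contract_total = 200 + 45 + (50 + 40) * months
print(contract_total)       # 2405, matching the article

# Next 18 installment plan: full phone price spread monthly, lower
# activation fee, lower access fee.
next18_total = 27.09 * months + 15 + (25 + 50) * months
print(round(next18_total))  # about 2465, matching the article
```

Laid out this way, the trade-off is visible: the contract saves $15 a month on fees ($90 versus $102.09) but costs $200 upfront and $30 more in activation, which is why the totals land within about $60 of each other.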

Some consumers also don't hang on to their device long enough to
enjoy the monthly price drop that happens when they pay off their
device fee in a contract-free plan, which typically happens two
years into owning a device. That's because they then immediately
upgrade to another device instead of taking advantage of the data
plan's price decline.

"Do you know anyone who keeps their handsets for three years?" Mr.
Toikka asked, adding that it makes little sense for consumers to pay
off their phones in less than two years, since it costs nothing
extra to spread the payments out.

Then there are times when the contract-free plans are better deals
than the two-year contract. Take a family of four, each with an
iPhone 6, on a 15-gigabyte data plan and on a two-year contract with
AT&T, who would pay $7,240 over two years. On a contract-free plan
like AT&T's Next 18, they would pay $6,501 over two years.

What Pricing Plan to Get for an iPhone 6

We waded through the endlessly bewildering options and found the
best bets.

All of this is hardly intuitive. Glenn Lurie, AT&T Mobility's chief
executive, suggested that the plans' designs were driven by the
desire to give consumers options. "We obviously want to give our
customers choice, and it's really, really important that we do that."
But when I asked whether consumers might be confused, he said, "We
simplified significantly, so no."

The Bottom Line

After comparing phone plans, Mr. Toikka determined that AT&T's
current plans offer the best value for individuals and families--
though there are caveats.

T-Mobile and Sprint offer the lowest prices. But Mr. Toikka
recommended AT&T or Verizon because of their broader network
coverage and because they offer similar pricing to each other. Where
AT&T wins is with a benefit called rollover data, which lets unused
data from one month be rolled over into the next month, giving
consumers more value over time.

Press officers for Verizon, Sprint and T-Mobile cited their own data
points. Verizon says third-party metrics had shown its network was
faster and more reliable than AT&T's, while Sprint said of its
network that "improvements are real and happening." T-Mobile says
its network coverage has grown and its plans offer more data, among
other benefits.

Among AT&T's plans, Mr. Toikka recommends that a family of four pick
AT&T's Next 18 contract-free plan with 15 gigabytes of data. For
individuals, he suggests AT&T's Next 18 contract-free plan with five
gigabytes of data. (AT&T's two-year contract with five gigabytes for
an individual is barely cheaper, so it's worth paying a little extra
to stay off contract, he said.)

He picked those based on pricing, the average amount of data used by
a consumer (roughly three gigabytes a month) and the average
smartphone upgrade cycle of two years. In other words, those plans
should have enough data for most people and also allow for device
upgrades every two years.

There are caveats. For one, carriers change their prices frequently,
and many of the prices are limited-time offers.

There is also no one-size-fits-all phone plan, because it depends on
the quality of each carrier's network in your area, how much data
you consume and the number of lines on the plan.

So consumers should study their phone bills to calculate the average
amount of data they used over the last year. They should look up
each carrier's coverage maps online to see if the network has strong
coverage where they live or work. Then they should work out the math
and the cost over two years.

"Selecting a cellphone plan is the most complicated financial
transaction that a consumer will ever make," Mr. Toikka said.

Friday, September 4, 2015

[tt] NYT: Google's Driverless Cars Run Into Problem: Cars With Drivers

Google's Driverless Cars Run Into Problem: Cars With Drivers


MOUNTAIN VIEW, Calif.--Google, a leader in efforts to create
driverless cars, has run into an odd safety conundrum: humans.

Last month, as one of Google's self-driving cars approached a
crosswalk, it did what it was supposed to do when it slowed to allow
a pedestrian to cross, prompting its "safety driver" to apply the
brakes. The pedestrian was fine, but not so much Google's car, which
was hit from behind by a human-driven sedan.

Google's fleet of autonomous test cars is programmed to follow the
letter of the law. But it can be tough to get around if you are a
stickler for the rules. One Google car, in a test in 2009, couldn't
get through a four-way stop because its sensors kept waiting for
other (human) drivers to stop completely and let it go. The human
drivers kept inching forward, looking for the advantage--
paralyzing Google's robot.

It is not just a Google issue. Researchers in the fledgling field of
autonomous vehicles say that one of the biggest challenges facing
automated cars is blending them into a world in which humans don't
behave by the book. "The real problem is that the car is too safe,"
said Donald Norman, director of the Design Lab at the University of
California, San Diego, who studies autonomous vehicles.

The company has begun building a fleet of experimental
electric-powered vehicles that can't be driven by people and are
summoned with a smartphone app.

"They have to learn to be aggressive in the right amount, and the
right amount depends on the culture."

Traffic wrecks and deaths could well plummet in a world without any
drivers, as some researchers predict. But wide use of self-driving
cars is still many years away, and testers are still sorting out
hypothetical risks--like hackers--and real world challenges,
like what happens when an autonomous car breaks down on the highway.

For now, there is the nearer-term problem of blending robots and
humans. Already, cars from several automakers have technology that
can warn or even take over for a driver, whether through advanced
cruise control or brakes that apply themselves. Uber is working on
the self-driving car technology, and Google expanded its tests in
July to Austin, Tex.

Google cars regularly take quick, evasive maneuvers or exercise
caution in ways that represent the most cautious approach but are
out of step with the other vehicles on the road.

"It's always going to follow the rules, I mean, almost to a point
where human drivers who get in the car and are like 'Why is the car
doing that?'" said Tom Supple, a Google safety driver during a
recent test drive on the streets near Google's Silicon Valley
headquarters.
Since 2009, Google cars have been in 16 crashes, mostly
fender-benders, and in every single case, the company says, a human
was at fault. This includes the rear-end crash on Aug. 20, which
Google reported on Tuesday. The Google car slowed for a pedestrian,
then the Google employee manually applied the brakes. The car was
hit from behind, sending the employee to the emergency room for mild
whiplash.
Google's report on the incident adds another twist: While the safety
driver did the right thing by applying the brakes, if the autonomous
car had been left alone, it might have braked less hard and traveled
closer to the crosswalk, giving the car behind a little more room to
stop. Would that have prevented the collision? Google says it's
impossible to say.

There was a single case in which Google says the company was
responsible for a crash. It happened in August 2011, when one of its
Google cars collided with another moving vehicle. But, remarkably,
the Google car was being piloted at the time by an employee. Another
human at fault.

Humans and machines, it seems, are an imperfect mix. Take lane
departure technology, which uses a beep or steering-wheel vibration
to warn a driver if the car drifts into another lane. A 2012
insurance industry study that surprised researchers found that cars
with these systems experienced a slightly higher crash rate than
cars without them.

Bill Windsor, a safety expert with Nationwide Insurance, said that
drivers who grew irritated by the beep might turn the system off.
That highlights a clash between the way humans actually behave and
how the cars wrongly interpret that behavior: the car beeps when a
driver moves into another lane, but in reality the human driver
intends to change lanes without having signaled, so the driver,
irked by the beep, turns the technology off.

Mr. Windsor recently experienced firsthand one of the challenges as
sophisticated car technology clashes with actual human behavior. He
was on a road trip in his new Volvo, which comes equipped with
"adaptive cruise control." The technology causes the car to
automatically adapt its speeds when traffic conditions warrant.

But the technology, like Google's car, drives by the book. It leaves
what is considered the safe distance between itself and the car
ahead. This also happens to be enough space for a car in an
adjoining lane to squeeze into, and, Mr. Windsor said, they often did.
Dmitri Dolgov, head of software for Google's Self-Driving Car
Project, said that one thing he had learned from the project was
that human drivers needed to be "less idiotic."

On a recent outing with New York Times journalists, the Google
driverless car took two evasive maneuvers that simultaneously
displayed how the car errs on the cautious side, but also how
jarring that experience can be. In one maneuver, it swerved sharply
in a residential neighborhood to avoid a car that was poorly parked,
so much so that the Google sensors couldn't tell if it might pull
into traffic.

More jarring for human passengers was a maneuver that the Google car
took as it approached a red light in moderate traffic. The laser
system mounted on top of the driverless car sensed that a vehicle
coming the other direction was approaching the red light at
higher-than-safe speeds. The Google car immediately jerked to the
right in case it had to avoid a collision. In the end, the oncoming
car was just doing what human drivers so often do: not approaching a
red light cautiously enough, though the driver did stop well in time.
Courtney Hohne, a spokeswoman for the Google project, said current
testing was devoted to "smoothing out" the relationship between the
car's software and humans. For instance, at four-way stops, the
program lets the car inch forward, as the rest of us might,
asserting its turn while looking for signs that it is being allowed
to go.

The way humans often deal with these situations is that "they make
eye contact. On the fly, they make agreements about who has the
right of way," said John Lee, a professor of industrial and systems
engineering and expert in driver safety and automation at the
University of Wisconsin.

"Where are the eyes in an autonomous vehicle?" he added.

But Mr. Norman, from the design center in San Diego, after years of
urging caution on driverless cars, now welcomes quick adoption
because he says other motorists are increasingly distracted by
cellphones and other in-car technology.

Witness the experience of Sena Zorlu, a co-founder of a Sunnyvale,
Calif., analytics company, who recently saw one of Google's
self-driving cars at a red light in Mountain View. She could not
resist the temptation to grab her phone and take a picture.

"I don't usually play with my phone while I'm driving. But it was
right next to me so I had to seize that opportunity," said Ms.
Zorlu, who posted the picture to her Instagram feed.

[tt] Science Daily: Musical tastes offer a window into how you think

Musical tastes offer a window into how you think

Date: July 22, 2015
Source: University of Cambridge

Summary: Do you like your jazz to be Norah Jones or Ornette Coleman,
your classical music to be Bach or Stravinsky, or your rock
to be Coldplay or Slayer? The answer could give an insight
into the way you think, say researchers.


Do you like your jazz to be Norah Jones or Ornette Coleman, your
classical music to be Bach or Stravinsky, or your rock to be
Coldplay or Slayer? The answer could give an insight into the way
you think, say researchers from the University of Cambridge.

In a study published today in the journal PLOS ONE, a team of
psychologists show that your thinking style--whether you are an
'empathizer' who likes to focus on and respond to the emotions of
others, or a 'systemizer' who likes to analyse rules and patterns in
the world--is a predictor of the type of music you like.

Music is a prominent feature of everyday life, present nearly
everywhere we go. It's easy for us to know what types of music we
like and
don't like. When shuffling songs on an iPod, it takes us only a few
seconds to decide whether to listen or skip to the next track.
However, little is known about what determines our taste in music.

Researchers over the past decade have argued that musical
preferences reflect explicit characteristics such as age and
personality. For example, people who are open to new experiences
tend to prefer music from the blues, jazz, classical, and folk
genres, and people who are extraverted and 'agreeable' tend to
prefer music from the pop, soundtrack, religious, soul, funk,
electronic, and dance genres.

Now a team of scientists, led by PhD student David Greenberg, has
looked at how our 'cognitive style' influences our musical choices.
This is measured by looking at whether an individual scores highly
on 'empathy' (our ability to recognize and react to the thoughts and
feelings of others) or on 'systemizing' (our interest in
understanding the rules underpinning systems such as the weather,
music, or car engines)--or whether we have a balance of both.

"Although people's music choices fluctuate over time, we've
discovered a person's empathy levels and thinking style predict
what kind of music they like," said David Greenberg from the
Department of Psychology. "In fact, their cognitive style--whether
they're strong on empathy or strong on systems--can be a better
predictor of what music they like than their personality."

The researchers conducted multiple studies with over 4,000
participants, who were recruited mainly through the myPersonality
Facebook app. The app asked Facebook users to take a selection of
psychology-based questionnaires, the results of which they could
place on their profiles for other users to see. At a later date,
they were asked to listen to and rate 50 musical pieces. The
researchers used library examples of musical stimuli from 26 genres
and subgenres, to minimise the chances that participants would have
any personal or cultural association with the piece of music.

People who scored high on empathy tended to prefer mellow music
(from R&B, soft rock, and adult contemporary genres), unpretentious
music (from country, folk, and singer/songwriter genres) and
contemporary music (from electronica, Latin, acid jazz, and Euro
pop). They disliked intense music, such as punk and heavy metal. In
contrast, people who scored high on systemizing favoured intense
music, but disliked mellow and unpretentious musical styles.

The results proved consistent even within specified genres:
empathizers preferred mellow, unpretentious jazz, while systemizers
preferred intense, sophisticated (complex and avant-garde) jazz.

The researchers then looked more in-depth and found those who scored
high on empathy preferred music that had low energy (gentle,
reflective, sensual, and warm elements), or negative emotions (sad
and depressing characteristics), or emotional depth (poetic,
relaxing, and thoughtful features). Those who scored high on
systemizing preferred music that had high energy (strong, tense, and
thrilling elements), or positive emotions (animated and fun
features), and which also featured a high degree of cerebral depth
and complexity.

David Greenberg, a trained jazz saxophonist, says the research could
have implications for the music industry. "A lot of money is put
into algorithms to choose what music you may want to listen to, for
example on Spotify and Apple Music. By knowing an individual's
thinking style, such services might in future be able to fine tune
their music recommendations to an individual."

Dr Jason Rentfrow, the senior author on the study says: "This line
of research highlights how music is a mirror of the self. Music is
an expression of who we are emotionally, socially, and cognitively."

Professor Simon Baron-Cohen, a member of the team, added: "This new
study is a fascinating extension to the 'empathizing-systemizing'
theory of psychological individual differences. It took a talented
PhD student and musician to even think to pose this question. The
research may help us understand those at the extremes, such as
people with autism, who are strong systemizers."

Based on their findings, the following are songs that the
researchers believe are likely to fit particular styles:

High on empathy
* Hallelujah--Jeff Buckley
* Come away with me--Norah Jones
* All of me--Billie Holiday
* Crazy little thing called love--Queen

High on systemizing
* Concerto in C--Antonio Vivaldi
* Etude Opus 65 No 3--Alexander Scriabin
* God save the Queen--The Sex Pistols
* Enter Sandman--Metallica

Journal Reference:
1. David M. Greenberg, Simon Baron-Cohen, David J. Stillwell,
Michal Kosinski, Peter J. Rentfrow. Musical Preferences are
Linked to Cognitive Styles. PLOS ONE, 2015; 10 (7): e0131151
DOI: 10.1371/journal.pone.0131151

[tt] NS 3036: Hormones boost placebo effect by making you want to cooperate

NS 3036: Hormones boost placebo effect by making you want to cooperate
24 August 2015

A placebo can make you feel a little better - and now we know how to
boost the effect. Drugs based on hormones that make us more
cooperative seem to enhance the placebo effect. The finding could
lead to changes in the way some trials are performed.

Sometimes a sugar pill can be all you need, even when you know it
doesn't contain any medicine. We're still not entirely sure why. The
brain's natural painkillers, such as dopamine and opioids, seem to
be involved, but other factors may be at work too. Evidence that a
compassionate, trustworthy carer can speed recovery suggests that
there is also a social dimension to the placebo effect.

"This interaction between the patient and care provider seems to be
based on a more complex system," says Luana Colloca at the
University of Maryland in Baltimore.

Hormones that modulate our social behaviour might play a role. Last
year, a team led by Ulrike Bingel of the University of
Duisburg-Essen in Germany found that oxytocin - the so-called
"cuddle chemical"
that is thought to help us trust, bond and form relationships -
seems to boost the placebo effect, at least in men.

In the study, Bingel's team applied an inert ointment to the arms of
male volunteers. Half of them were told that the cream would reduce
the degree of pain caused by the painfully hot stimulus subsequently
applied. Men who were told that they were receiving pain relief said
that the heat was less painful than those who knew that the cream
was inert. When oxytocin was squirted up volunteers' noses, the men
reported being in even less pain. The team didn't test oxytocin in
women.
Trust issues

Colloca wondered if another hormone - vasopressin - might have a
similar effect. Vasopressin has also been linked to trust and
commitment to relationships. "We know that receptors for oxytocin
and vasopressin are in very similar areas of the brain," says
Colloca.

To find out, her team administered moderately painful electric
shocks to the fingers of 109 men and women. The intensity of the
shock was tailored to each individual so that they all reported the
same, moderate level of pain. The participants were also told that
each time they saw a green light, the electric shock would be
reduced, but that it would be kept at the same level when a red
light was displayed. In reality, the intensity never changed.

In addition, each participant was given a sniff of either
vasopressin, a placebo of salt water, a very low dose of oxytocin or
nothing at all.

Colloca's team compared the pain ratings of the volunteers when they
were shown a red light to the scores given when they saw a green
light. Any difference represents a placebo effect, says Colloca. All
of the volunteers showed a placebo effect, but it was markedly
stronger in the women given vasopressin.

This makes sense, says Colloca. Previous research suggests that
while vasopressin seems to promote aggression and rivalry between
men, it encourages "tend-and-befriend" tendencies among women.
Colloca administered the treatments herself, and although the women
didn't outwardly behave any differently, she thinks that the women
given vasopressin probably felt more at ease, and were cooperative
and trusting of her as a care provider.

"It is remarkable," says Rene Hurlemann at the University of Bonn in
Germany. He thinks that hormones like vasopressin may be responsible
for the placebo effects seen in some clinical trials.

Trial and improvement

"Many clinical trials for drugs for depression have struggled to
produce results that are better than placebo," he says. "Some say
that antidepressants don't work, but I think that's nonsense."
Instead, hormone systems may be altered in some diseases like
depression. "This has been completely overlooked in medical trials,"
says Hurlemann.

In future, researchers might be able to find ways to block the
effects of hormones like oxytocin and vasopressin in clinical
trials, or at least factor them in, and put a stop to the potential
skewing of trial results.

As drugs, the hormones could also be used to enhance the effects of
other medicines - but it might be easier to work on improving the
environment a person is in when they are receiving medical
treatment, says Tor Wager at the University of Colorado at Boulder.
"We like to think that a drug does one thing, but the context in
which it is given can modify its effects," he says.

The social aspects of medical treatment appear to be especially
important, says Wager. In his own research, Wager says he has found
that providing people with positive or negative information about
others changes their perception of pain - to a greater degree than
the typical measures of placebo effect. "There seems to be something
special about social feedback," he says.

Colloca hopes the hormones can be applied in treatments for chronic
pain. "We know the best care providers interact well with their
patients," she says. "It might be possible to help a person control
their pain by enhancing this cooperation response."

Journal reference: Biological Psychiatry

By Jessica Hamzelou

[tt] NS 3036: More than 100 billion billion Earth-like planets might exist

Is there intelligent life on Earth?

NS 3036: More than 100 billion billion Earth-like planets might exist
26 August 2015

YOUR existence is unbelievably unlikely. Think of everything that
happened for you to be born: your parents met, a particular sperm
fertilised a particular egg, ultimately giving rise to the specific
sequence of genes that is you.

But if it hadn't happened that way, someone else would be reading
this in your place. We're unique, but that doesn't make us special:
there are 7 billion other humans on the planet. Now, thanks to a
glut of data on planets in other star systems, astronomers are
starting to realise the same is true of Earth itself.

Researchers have discovered nearly 2000 exoplanets so far, the
majority found by NASA's Kepler space telescope. Getting a better
picture of our galactic neighbours helps put our solar system into
context, says Peter Behroozi of the Space Telescope Science
Institute in Baltimore, Maryland. "Kepler has been fantastic for
setting some of the limits on how many planets are likely to be
found around stars, especially for Earth-like planets."

Behroozi and his colleague Molly Peeples have combined the latest
exoplanet statistics with our understanding of how galaxies form
stars. The result is a formula that tracks the growth in the number
of planets in the universe over time.

It suggests there are currently 10^20, or 100 billion billion,
Earth-like planets in the universe, with an equivalent number of gas
giants. "Earth-like" doesn't mean an exact replica of our planet,
but rather a rocky world that, if blanketed by a suitable
atmosphere, would hold liquid water on its surface. Applied to the
solar system, this definition would include Mars and Venus but not
Mercury or the moon.

And that's just the start. Only a fraction of the gas within all the
galaxies in the cosmos has cooled enough to start collapsing, so
stars and planets will continue forming for billions of years. That
means 92 per cent of the universe's Earth-like planets won't exist
until long after the sun has died and taken the Earth with it.

"Over 90 per cent of Earth-like planets have yet to form, and won't
until long after the sun dies"

"Philosophically, if you want to know our place in the universe as a
whole, then you also need to include what will happen in the
future," says Behroozi. "I didn't expect to find the Earth had
formed so early."

Figuring out where we fit in the grand cosmic timeline also gives us
an idea of how many other civilisations might be out there. Suppose
intelligent life is so rare that Earth is the first planet in the
universe to evolve a civilisation - an almost ludicrously
conservative assumption. Then the sheer number of future Earth-like
planets means that the likelihood of us being the only civilisation
the universe will ever have is at most 8 per cent.
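The arithmetic behind that figure can be sketched with a quick
simulation (a toy illustration, not the authors' calculation): if
only about 8 per cent of the Earth-like planets that will ever exist
have formed so far, and the universe's sole civilisation is equally
likely to arise on any of them, the chance that it sits on an
already-formed planet - that is, that it is us - is just that 8 per
cent.

```python
import random

random.seed(1)
FORMED_FRACTION = 0.08   # share of all Earth-like planets formed so far
TRIALS = 200_000

# Suppose civilisations are so rare that the universe only ever hosts
# one, and every Earth-like planet, past or future, is equally likely
# to be the host. How often does the host planet already exist?
hits = sum(random.random() < FORMED_FRACTION for _ in range(TRIALS))
estimate = hits / TRIALS
print(round(estimate, 2))   # ~0.08: the "at most 8 per cent" in the text
```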

If we find just one other inhabited planet in the Milky Way, the
number of other such worlds rockets up. Such a discovery, together
with the unlikeliness of our galaxy being the only one to host life,
would make Earth at least the 10 billionth civilisation in the
universe at present.

Despite this abundance of other Earths, the odds of there being an
exact carbon copy of our world are so low as to be impossible, just
as the genetic lottery will never produce your exact twin in another
family. But as we are now realising, that's in part due to the
precariousness of our existence in the first place.

The basic recipe for a solar system reads: take one giant gas cloud
and leave it to collapse under gravity for a good few million years,
until it becomes a newborn star surrounded by a spinning disc of
leftover gas and dust. Wait patiently for another few million years
as this clumps up to form planets.

But the details of how this process plays out are still a mystery.
One way to study it is through computer simulations of swirling
particles. Set them up with different initial conditions, and rate
how closely the resulting planets match what we find in our solar

It turns out that even near-identical starts lead to different
outcomes, as Volker Hoffmann at the University of Zurich,
Switzerland, and his colleagues have discovered. They simulated a
solar system in the middle of forming, starting with a gas disc
containing 2000 planetesimals - lumps each around 4 per cent of the
moon's mass.

Previous work shows that Jupiter and Saturn formed earlier than the
other planets, and that their gravity had a large influence on the
rest of the solar system. So Hoffmann's team also simulated systems
with two gas giants in two different types of orbit.

The team ran all three scenarios 12 times with slightly different
initial conditions, each simulation requiring a month of computing
time. They found that if just one planetesimal was moved by a
millimetre, an entirely different set of worlds emerged.

This was surprising, Hoffmann says, but it makes sense because the
planetesimals can interact in many ways, so the situation is
inherently chaotic. "It's the same issue that happens in weather and
climate modelling," he says. "You change the initial condition a
little bit, and the system goes somewhere else."
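The same sensitivity can be seen in miniature with the logistic map,
a standard toy chaotic system (an illustration of the general
phenomenon, not the team's N-body code): two runs whose starting
values differ by one part in a trillion soon bear no resemblance to
each other.

```python
def logistic_trajectory(x0, steps=60, r=4.0):
    """Iterate the chaotic logistic map x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-12)   # perturb the start by 1 part in 10^12

early = max(abs(x - y) for x, y in zip(a[:11], b[:11]))
late = max(abs(x - y) for x, y in zip(a[40:], b[40:]))
print(early)   # still tiny: the two runs look identical at first
print(late)    # order one: the trajectories have completely diverged
```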

Gas giant helpers

The consequences are startling. Even smaller changes can probably
give rise to the same chaotic effects, says Hoffmann, so adding just
one extra molecule to the early solar system could mean the Earth
never formed.

"Adding even one extra molecule to the early solar system could mean
the Earth never formed"

But, slightly counter-intuitively, the simulated solar systems end
up looking quite similar, the exceptions being the simulations with
no gas giants. These end up with around 11 rocky planets, most of
which are less than half the mass of Earth.

Stick the gas giants in, and you get around four rocky planets
ranging from half an Earth mass to a little more massive than Earth
- a pretty good match for our own solar system.

So even if that extra molecule had been floating about, the result
wouldn't have looked that different. "Something like Earth would
probably have come up, and maybe something alive would have
developed," says Hoffmann. "But not us."

Don't let this desolate randomness get you down, says Rebecca Martin
of the University of Nevada, Las Vegas. "It's exciting that we're
not special," she says, because it means life is abundant in the
universe and we can go looking for it.

Working with Mario Livio at the Space Telescope Science Institute,
Martin has compared data on the known exoplanets, plus likely but
unconfirmed candidates seen by Kepler, with planets in our solar
system. Their aim was to see if anything stood out as unusual.

For the most part, ours is a bog-standard system, but a few aspects
raise an eyebrow. Mercury is our innermost neighbour, yet it lies
beyond the average orbital distance for exoplanets, suggesting our
solar system is unusually stretched-out.

Alternatively, that could be down to observational biases. We
typically detect an exoplanet by the dimming or wobble its orbit
produces in its star's light, and this is much easier to spot the
closer in the planet is, because we have to wait less time for it
to complete an orbit. Recent work also suggests Mercury might
be the sole survivor of a pile-up between close-in worlds, which
could explain the lack of planets nearer the sun.

Our solar system also lacks super-Earths, objects more massive than
Earth but lighter than Uranus or Neptune. They can be rocky or
gassy, depending on their density, and seem to be very common in
other star systems. So why don't we have one?
Not quite Earth's twin, but close (Image: NASA/AMES/JPL-Caltech)

For now it's a mystery, but one possibility is that super-Earths are
bad for life. A drifting super-Earth can suck up rocky material that
would otherwise form an Earth-like world, leaving behind planets in
the star's habitable zone that are roughly Earth-sized but don't
look much like our home - they're less rocky and more gassy. If that
had happened here, we wouldn't be around to notice, so perhaps our
very existence precludes our having a super-Earth for a neighbour.

Taken together, all this research into exoplanets may herald another
Copernican-like revolution, as we realise just how mundane Earth is.
"Our papers are pointing to the fact that we're really not that
special," says Martin. "When you look at the exoplanetary systems
you might think we are quite different, but in the grand scheme of
things, we're not."

It might seem difficult to accept. But the upside is we're almost
certainly not alone - and if we are, that tells us something about
the conditions governing the evolution and destruction of life, says
Behroozi. "I personally believe that it's very unlikely we are the
only ones," he says; aliens must be out there. "Will they wonder
this question themselves, about how unique they are?" Maybe one day
we'll be able to compare notes.

By Jacob Aron

[tt] NS 3036: Diagnosis without doctors: Deep learning to transform medicine

NS 3036: Diagnosis without doctors: Deep learning to transform medicine
26 August 2015

IT'S a case of diagnosis without doctors. Software could soon be
working out what's wrong with you based only on medical data.

Machines have already transformed healthcare. MRI scanners can peer
inside the body, for example, and blood samples are analysed
automatically, but human skill has always been an integral part of
the process: a scan reveals a shadow - the oncologist recognises its
significance. But doctors are often busy and overworked; they can
make mistakes or overlook telltale symptoms. If computers could
understand health on their own terms, perhaps they could speed up
diagnosis and even make it more accurate.

Central to the new approach are advances in machine learning - the
way software can be trained to recognise important features in an
image, for example. It's a powerful tool, but generally the software
requires a lot of virtual hand-holding: images might have to be
carefully aligned, and human experts are needed to make sure the
software is trained to recognise the right features.

Deep learning is more flexible. Here, the software works at multiple
levels simultaneously. Given a simple image, the computer might
process the edges and lines while also considering what the image as
a whole portrays - "dog" or "cat", say. The approach means deep
learning can make inferences about sets of data containing quite
different concepts without human guidance.
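The layered idea can be pictured with a crude sketch (a toy, not any
of the systems described here): a first layer of a network responds
to local patterns such as edges, and a second layer combines those
responses into a judgement about the image as a whole.

```python
def relu(values):
    """Zero out negative activations, as deep networks typically do."""
    return [max(0.0, v) for v in values]

def dot(weights, values):
    return sum(w * v for w, v in zip(weights, values))

image = [0.0, 1.0, 1.0, 0.0]       # 4-pixel "image": dark, bright, bright, dark
layer1 = [[-1.0, 1.0, 0.0, 0.0],   # low-level filter: dark-to-bright left edge
          [0.0, 0.0, 1.0, -1.0]]   # low-level filter: bright-to-dark right edge
layer2 = [1.0, 1.0]                # high-level unit: "object" if both edges fire

edge_responses = relu([dot(f, image) for f in layer1])
whole_image_score = dot(layer2, edge_responses)
print(whole_image_score)           # 2.0: both edges detected, object present
```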

This could be a game-changer for medicine, says Andrew Bradley at
the University of Queensland in Australia. "[It can] readily combine
images obtained from multiple views and multiple modalities," he
says.

Take breast cancer detection. Diagnosis potentially requires
information from three sources: an X-ray, an MRI scan and ultrasound
- and cross-referencing is laborious and time-consuming. Not with
deep learning. Bradley and colleagues have a prototype system that
cross-references automatically. They will present it in October at
the International Conference on Medical Image Computing and Computer
Assisted Intervention in Munich, Germany.

"Deep learning software automatically cross-references information
from several sources"

Researchers at Tel Aviv University in Israel have been using deep
learning to analyse chest X-rays. So far, their system can
distinguish between enlarged hearts and fluid build-up around the
lungs.

Meanwhile, a group at the National Institutes of Health Clinical
Center in Bethesda, Maryland, is using similar methods to detect
cancerous growths on the spine. Both groups are getting results that
equal or better existing state-of-the-art detection algorithms.

But will doctors - or patients - ever accept the word of a machine?
That remains a problem, says Bradley. Deep learning's complex
networks are inscrutable, spitting out conclusions without giving
any account of how they were reached.

For instance, if you have ever had Facebook suggest you tag someone
you don't know as one of your friends, not even a Facebook engineer
could tell you why that happened. Apply that level of mystery to
medicine and, understandably, people may well get uneasy.

"Give them a black box? The clinicians are never going to embrace
that," Bradley says. That's why he has a second system. Once the
deep neural network is trained, Bradley uses its outputs to train
another, transparent model - a "white box" whose answers humans can
inspect and understand, and which fails only in known, predictable
circumstances. "In traditional systems the expert will build in
sanity checks," he says. "Is the thing the right size, colour,
place? If not then don't go further."
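Bradley's own system isn't described in detail, but the general idea
- fitting a transparent surrogate to an opaque model's outputs, often
called model distillation - can be sketched like this (the black box
below is a stand-in function, not a real trained network):

```python
import math

def black_box(x):
    """Stand-in for an inscrutable trained network."""
    return 2.0 * x + 1.0 + 0.05 * math.sin(x)

# Query the black box, then fit a transparent surrogate y = a*x + b by
# ordinary least squares; the surrogate's two numbers can be inspected
# and sanity-checked by a human in a way the black box cannot.
xs = [i / 10.0 for i in range(100)]
ys = [black_box(x) for x in xs]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x
print(round(slope, 2), round(intercept, 2))   # close to the 2 and 1 inside the box
```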

We can expect deep learning to have an impact on medicine, says
Brendan Frey at the University of Toronto, Canada, particularly with
the rise of personalised healthcare and the focus on genes. His
startup, Deep Genomics, is bringing the approach to genetic
medicine.

"Deep learning will transform personalised medicine, genetic testing
and pharmaceutical development," he says. "It provides the glue
between data and medical outcomes."

Leader: "Smart machines may discover things we can't, but we still
matter" [added at the end]

By Hal Hodson
