Friday, December 19, 2014

[tt] Live Science: Viking Women Colonized New Lands, Too

Revisionism marches on. The reason the Vikings had such a bad press is
that accounts of the Vikings were written by a biased source, namely

Viking Women Colonized New Lands, Too
[dropped the URL. sorry.]

by Tia Ghose, Staff Writer
Date: 07 December 2014 Time: 07:01 PM ET

Vikings may have been family men who traveled with their wives to
new lands, according to a new study of ancient Viking DNA.

Maternal DNA from ancient Norsemen closely matches that of
modern-day people in the North Atlantic isles, particularly from the
Orkney and Shetland Islands.

The findings suggest that both Viking men and women sailed on the
ships to colonize new lands. The new study also challenges the
popular conception of Vikings as glorified hoodlums with impressive
seafaring skills.

"It overthrows this 19th century idea that the Vikings were just
raiders and pillagers," said study co-author Erika Hagelberg, an
evolutionary biologist at the University of Oslo in Norway. "They
established settlements and grew crops, and trade was very, very

Vikings hold a special place in folklore as manly warriors who
terrorized the coasts of France, England and Germany for three
centuries. But the Vikings were much more than pirates and
pillagers. They established far-flung trade routes, reached the
shores of present-day America, settled in new lands and even founded
the modern city of Dublin, which was called Dyfflin by the Vikings.

Some earlier genetic studies have suggested that Viking males
traveled alone and then brought local women along when they settled
in a new location. For instance, a 2001 study published in the
American Journal of Human Genetics suggested that Norse men
brought Gaelic women over when they colonized Iceland.

Modern roots

To learn more about Norse colonization patterns, Hagelberg and her
colleagues extracted teeth and shaved off small wedges of long bones
from 45 Norse skeletons that were dated to between A.D. 796 and A.D.
1066. The skeletons were first unearthed in various locations around
Norway and are now housed in the Schreiner Collection at the
University of Oslo.

The team looked at DNA carried in the mitochondria, the energy
powerhouses of the cell. Because mitochondria are housed in the
cytoplasm of a woman's egg, they are passed on from a woman to her
children and can therefore reveal maternal lineage. The team
compared that material with mitochondrial DNA from 5,191 people from
across Europe, as well as with previously analyzed samples from 68
ancient Icelanders.
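
The comparison step can be pictured with a toy calculation: reduce
each sample to a mitochondrial haplogroup label, tally frequencies,
and ask which modern population's profile lies closest to the
ancient one. This is a sketch of the general idea only, not the
study's actual statistics; the haplogroup labels, sample data and
function names below are all invented.

```python
# Toy illustration of a maternal-lineage comparison (not the study's
# actual method): populations are compared by haplogroup frequency.
from collections import Counter

def haplogroup_profile(samples):
    """Map a list of haplogroup labels to relative frequencies."""
    counts = Counter(samples)
    return {hg: n / len(samples) for hg, n in counts.items()}

def closest_population(ancient_samples, modern_populations):
    """Return the name of the modern population whose haplogroup
    profile is nearest (by total absolute frequency difference)
    to the ancient profile."""
    def distance(p, q):
        return sum(abs(p.get(k, 0) - q.get(k, 0)) for k in set(p) | set(q))
    ancient = haplogroup_profile(ancient_samples)
    return min(modern_populations,
               key=lambda name: distance(
                   ancient, haplogroup_profile(modern_populations[name])))
```

On made-up data, a population whose frequency profile matches the
ancient samples would be returned as the closest match.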

The ancient Norse and Icelandic genetic material closely matched the
maternal DNA in modern North Atlantic people, such as Swedes, Scots
and the English. But the ancient Norse seemed most closely related
to people from the Orkney and Shetland Islands, Scottish isles that
are quite close to Scandinavia.

Mixed group

"It looks like women were a more significant part of the
colonization process compared to what was believed earlier," said
Jan Bill, an archaeologist and the curator of the Viking burial
ship collection at the Museum of Cultural History, a part of the
University of Oslo.

That lines up with historical documents, which suggest that Norse
men, women and children--but also Scottish, British and Irish
families--colonized far-flung islands such as Iceland, Bill told
Live Science. Bill was not involved with the new study.

"This picture that we have of Viking raiding--a band of long ships
plundering--there obviously would not be families on that kind of
ship," Bill said. "But when these raiding activities started to
become a more permanent thing, then at some point you may actually
see families are traveling along and staying in the camps."

As a follow-up, the team would like to compare ancient Norse DNA to
ancient DNA from Britain, Scotland and the North Atlantic Isles, to
get a better look at exactly how all these people are related,
Hagelberg said.

The findings were published today (Dec. 7) in the journal
Philosophical Transactions of the Royal Society B.


tt mailing list

[tt] NS 2999: Google's new bot-trap trains machines to see the world

NS 2999: Google's new bot-trap trains machines to see the world
* 10 December 2014 by Hal Hodson

The firm's updated Captchas still tell websites you are a person not
a robot - but now they are helping computers to recognise real
objects on the web

GOOGLE updated its captchas on Wednesday. Those fuzzy words and
blurred numbers, simple (in theory) for humans to decipher and hard
for bots, are there to guard websites, email services and social
networks from automated attack. But they are also fuelling the
development of artificial intelligence.

As spammers and their bots get ever better at breaking captchas,
Google continues to make them harder - as anyone who has ever failed
to decipher a mysterious swirl of lettering can attest.
Alternatives, such as audio versions for people with a visual
impairment, are less secure and often equally baffling.

The latest revamp fixes some of these problems, in part by doing
away with the tests for recognised users. People that Google already
knows are human can tick a box to affirm that "I'm not a robot" (see
"Do I know you from somewhere?"). Those who aren't automatically
recognised can now pick matching images out of a grid - cats in a
sea of dogs and hamsters, for instance.

Luis von Ahn, a computer scientist at Carnegie Mellon University in
Pittsburgh, Pennsylvania, realised he could put our captcha-solving
prowess to good use, and so find a silver lining to the internet's
bot problem. Instead of having us merely prove our humanness, von
Ahn designed the system to record and model that humanness and the
intelligence behind it.

Von Ahn sold his technology to Google in 2009, and the internet
giant has since improved on it, building a software engine that
teaches computers to recognise new things. As internet users verify
their humanness, solving problems that only human minds can figure
out, they build data sets that can be fed to algorithms. As well as
reducing spam and blocking bots, Google's system uses the humans it
protects to turn the web into training for AIs, with the goal of
improving their ability to recognise real-world objects.
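
The pipeline this paragraph describes (humans answer, agreement
among answers becomes a label, labelled examples feed a learning
algorithm) can be sketched minimally. This is a guess at the general
shape, not Google's actual system; the thresholds, names and data
below are invented.

```python
# Sketch: turn repeated human captcha answers into training labels
# by majority vote, keeping only labels with broad agreement.
from collections import Counter

def consensus_label(answers, min_votes=3, min_agreement=0.7):
    """Return the majority answer if enough humans agree, else None."""
    if len(answers) < min_votes:
        return None
    label, count = Counter(answers).most_common(1)[0]
    return label if count / len(answers) >= min_agreement else None

def build_training_set(captcha_responses):
    """captcha_responses: {image_id: [human answers]}.
    Returns (image_id, label) pairs suitable for training."""
    dataset = []
    for image_id, answers in captcha_responses.items():
        label = consensus_label(answers)
        if label is not None:
            dataset.append((image_id, label))
    return dataset
```

Images without enough agreeing answers simply never enter the
training set, which is one plausible way to keep the labels clean.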

A team led by Dumitru Erhan at Google is working on a way to
automatically caption pictures, and can already accurately generate
captions such as "A group of people shopping at an outdoor market".
Other research teams are on a similar path.

Roman Yampolskiy, a computer scientist at the University of
Louisville in Kentucky, says the information gathered through
captchas has helped machine learning algorithms to reach their
current state.

As humans tear through each new generation of captchas, the data
they generate can teach machine learning algorithms to solve the
same problems, giving computers a handle on more and more of the
world. Yampolskiy and his graduate students are working on a system
that will allow any set of images on the internet to be used for
captchas, enabling humans to train AI to tell different animals
apart, or, say, tell cars from washing machines.

For the most part, data from captchas go to feed artificial neural
networks, the computing systems that are inspired by the way the
brain works.

Captcha's evolution is evidence of the progress AI is making. The
first puzzles can now be solved automatically by computers 99.8 per
cent of the time, according to Google. Captchas are one of the most
prominent human interfaces with this progress. Face recognition and
search algorithms are improving behind the scenes, but mostly we are
all stuck solving those annoying little tests, man and bot.

Do I know you from somewhere?

How does Google know when to give real people a free pass on to a
site, without cat categorising (see main story)? When it recognises
them.

Google has what is probably the largest store of personal data on
the planet. So it is fairly simple for the company to match its
records up with incoming IP addresses and cookies - the small piece
of code that websites use to keep track of their visitors. So they
can be pretty sure that the person trying to pass the captcha is
human.

This has sparked some fears that Google is making it harder for
non-Google users to prove their humanness around the rest of the
web, as many services also use Google's captchas.

[tt] NS 2999: Forget Turing - I want to test computer creativity

NS 2999: Forget Turing - I want to test computer creativity
* 15 December 2014 by Sean O'Neill

The Turing test is too easy - creativity should be the benchmark of
human-like intelligence, says Mark Riedl, inventor of the Lovelace
2.0 test

Mark Riedl is an associate professor at Georgia Tech's School of
Interactive Computing in Atlanta. His work straddles artificial
intelligence, virtual worlds and storytelling. He has developed a
new form of the Turing test, called the Lovelace 2.0 test

What are the elements of the Turing test?
The Turing test was a thought experiment that suggested if someone
can't tell the difference between a human and a computer when
communicating with them using just text chat or something similar,
then whatever they're chatting with must be intelligent. When Alan
Turing wrote his seminal paper on the topic in 1950, he wasn't
proposing that the test should actually be run. He was trying to
convince people that it might be possible for computers to have
human-like abilities, but he had a hard time defining what
intelligence was.

Why do you think the test needs upgrading?
It has been beaten at least three times now by chatbots, which
almost every AI researcher will tell you they don't think are very
intelligent.

A 2001 test called the Lovelace test tried to address this, right?
Yes. That test, named after the 19th-century mathematician Ada
Lovelace, was based on the notion that if you want to look at
human-like capabilities in AI, you mustn't forget that humans create
things, and that requires intelligence. So creativity became a proxy
for intelligence. The researchers who developed that test proposed
that an AI can be asked to create something - a story or poem, say -
and the test would be passed only if the AI's programmer could not
explain how it came up with its answer. The problem is, I'm not sure
that the test actually works because it's very unlikely that the
programmer couldn't work out how their AI created something.

How is your Lovelace 2.0 test different?
In my test, we have a human judge sitting at a computer. They know
they're interacting with an AI, and they give it a task with two
components. First, they ask for a creative artefact such as a story,
poem or picture. And secondly, they provide a criterion. For
example: "Tell me a story about a cat that saves the day"; or "Draw
me a picture of a man holding a penguin."

Must the artefacts be aesthetically pleasing?
Not necessarily. I didn't want to conflate intelligence with skill:
the average human can play Pictionary but can't produce a Picasso.
So we shouldn't demand super-intelligence from our AIs.

What happens after the AI presents the artefact?
If the judge is satisfied with the result, they make another, more
difficult, request. This goes on until the AI is judged to have
failed a task, or the judge is satisfied that it has demonstrated
sufficient intelligence. The multiple rounds mean you get a score
as opposed to a pass or fail. And we can record a judge's various
requests so that they can be tested against many different AIs.

So your test is more of an AI comparison tool?
Exactly. I'd hate to make a definitive prediction of what it will
take for an AI to achieve human-like intelligence. That's a
dangerous sort of thing to say.

[tt] NS 2999: The fight back against rape and death threats online

NS 2999: The fight back against rape and death threats online
* 12 December 2014 by Aviva Rutkin

[Leader: "Stamping out online abuse will take a concerted effort" added.]

Twitter, police and law courts are waking up to the ugly abuse that
women endure online, with systems, algorithms and trials closing in
on perpetrators

"CAN'T wait to rape you." That's one of the anonymous messages sent
earlier this year to Janelle Asselin, a comic-book editor and writer
based in Los Angeles.

Asselin had written an article criticising the cover of the comic
book Teen Titans #1. The response: a slew of tweets and comments
from people questioning her credentials, calling her unprintable
names and sending rape threats. "Most of the women I know with a
solid online presence get them regularly. This is just a thing we
are forced to deal with," Asselin later wrote in a blog post.

Asselin's story is not unusual. Concern over such online harassment
has escalated in the wake of Gamergate, a movement that purports to
call for better ethics in game journalism but has become known for
ugly attacks aimed at its detractors.

Now, it seems, the problem might have finally peaked. Social-media
companies, activist groups and even federal judges are rethinking
how to best handle online abuse - and how to make internet users
feel safe.

"The goal of these online mobs is to scare you enough to silence
you," says Allyson Kapin, founder of online-marketing firm Rad
Campaign and the Women Who Tech network in Washington DC. "People
really want to see social networks step in and begin to take a stand
against online harassment."

Twitter in particular has stepped up its game. In November it teamed
up with Women, Action, and the Media, a gender-justice group that
will study how the company handles abuse problems and suggest
possible improvements. And last week, the firm announced changes
that will make it easier to flag problematic messages and accounts.

Attention was drawn to the issue of social-media abuse this year by
several high-profile cases: game developer Brianna Wu was forced to
flee her home; feminist media critic Anita Sarkeesian was scared
away from speaking at Utah State University over death threats; and
actress Zelda Williams was driven to social-media silence after
nasty messages about her father Robin Williams's suicide.

But such harassment is not restricted to a few unlucky targets - a
study by Pew Research released earlier this year revealed that 40
per cent of internet users have been harassed online. More than half
said that they did not know the perpetrator, and 66 per cent said
the most recent abuse occurred on social media, rather than in
emails, comment sections or online games.

Twitter's upcoming changes make it easier and faster to remove the
harassers. Before, users had to fill out a nine-part questionnaire
to report an offensive tweet; many of these questions have now been
removed or streamlined. In addition, the company is attempting to
better tackle the sheer volume of complaints by prioritising tweets
that are flagged by large groups of people.
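
Prioritising tweets flagged by large groups, as described above,
amounts to ranking reported items by how many distinct people
flagged them. A minimal sketch with invented data shapes (Twitter's
real pipeline is not public):

```python
# Sketch: triage abuse reports by counting distinct reporters per
# tweet, so widely flagged tweets surface first.
from collections import defaultdict

def prioritise_reports(reports):
    """reports: list of (tweet_id, reporter_id) pairs.
    Returns tweet ids ordered by number of distinct reporters,
    most-reported first."""
    flags = defaultdict(set)
    for tweet_id, reporter_id in reports:
        flags[tweet_id].add(reporter_id)   # sets ignore duplicates
    return sorted(flags, key=lambda t: len(flags[t]), reverse=True)
```

Counting distinct reporters rather than raw reports means one person
repeatedly flagging the same tweet cannot push it up the queue.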

Artificial intelligence could help, too. At the Massachusetts
Institute of Technology, Karthik Dinakar has developed algorithms
that can detect abusive speech in online comments. Such a program
could feasibly be implemented in any of the major social networks.
Dinakar suggests that software such as his could encourage users to
rethink aggressive messages before they hit "send" by, for example,
imposing a 30-second waiting period or by querying a potentially
troublesome message ("Do you really want to say that to 600
people?").

"People say things online that they would never say face-to-face or
in person," says Dinakar. "We need better tools not just for
monitoring, but also to tell people what good digital citizenry is
all about."

Social networks can only do so much, however: deleted accounts can
be quickly replaced by new ones. And many police departments are
unsure how to investigate internet threats or, worse, dismiss them
as less serious than real-world threats. In her book Hate Crimes in
Cyberspace, University of Maryland law professor Danielle Citron
argues that online harassment is a civil-rights issue that faces the
same stigmas that stymied sexual-harassment cases in the 1970s.

Better police training could be a big help. At the College of
Policing in the UK, a new internet-friendly curriculum will teach
officers to take these reports more seriously and advise victims on
how to collect evidence. Others have suggested equipping
law-enforcement agencies with the spyware tools necessary to search
for and ensnare offenders.

Some of the legal questions might be answered in Elonis v. United
States, a case in which abusive comments made on Facebook by a man
about his wife are being scrutinised by the US Supreme Court.
Although the case does not directly address mob-like harassment,
such as that directed against women by Gamergate, it could signify a
shift in how US law defines free speech online.

While the court deliberates, private companies have an opportunity
to be agents of change, says Mary Anne Franks, a law professor at
the University of Miami in Florida. They are not required to enforce
free speech, but can instead set their own rules for what is
permissible on their platform.

"We have this tremendous potential at our disposal," Franks says. "A
company like Google or Twitter or Facebook could easily
revolutionise how we engage in discourse."

Five ways to protect yourself online

If you are being threatened online, there are things you can do to
protect yourself.

Save everything Even if you would rather just delete the threats,
it's important to save copies of everything. Those documents could
help authorities to find and prosecute the offender.

Report the abuse Most social-media sites offer users a way to report
harassment. You can also file a complaint with the police or online
at the Internet Crime Complaint Center if you are in the US.

Filter it San Francisco engineer Randi Harper got fed up with
negative reactions to a blog post she had written about her
experiences with sexism and harassment. She built "Good Game Auto
Blocker", which automatically finds Twitter users associated with
Gamergate mobs and adds them to a block list.

Hire a detective If the situation gets really bad, firms such as
Cyber Investigation Services in Tampa, Florida, specialise in
exposing anonymous harassers through psychological profiles,
internet forensics and decoy websites.

Tell their mum Australian video-game journalist Alanah Pearce
sometimes receives threats on her public Facebook page. When she
realised that many of her trolls were just kids, she started
tracking down their mothers' profiles and sending screenshots of the
concerning messages. One shocked mum forced her son to send Pearce a
handwritten letter of apology.
Leader: Stamping out online abuse is a job for all of us
* 12 December 2014

Governments, tech giants and individuals must act together if we are
to clamp down on threats and abuse in the digital world

DOES technology create spaces that are beyond the law? The issue of
who should take responsibility for stamping out online abuse might
make it seem so.

This year has been marked by mounting concern about online hate
speech, from Islamist to misogynist. But who should decide when a
post goes too far? Last week Peter Fahy, chief constable of Greater
Manchester Police, said he and his colleagues in the security
establishment should not be left to judge when unpopular opinions
become actionable abuse. That, he warned, risked "a drift to a
police state". He might have added that the sheer volume of such
cases defies conventional policing.

Unconventional policing, then? It is tempting to ask the tech
companies behind the social media boom to build tools that can
automatically spot abuse and address it. But even a spotting system
orders of magnitude better than anything available today would
generate huge numbers of false positives when dealing with, say,
Facebook traffic.

Many such companies have been reluctant to engage with this issue,
citing a generally admirable commitment to freedom of speech. And
they are loath to act as anything other than a carrier of messages,
although that stance is growing hard to square with their desire to
store and analyse every last scrap of our personal data.

But just because there is no easy balance doesn't mean we should
give up. And indeed, hands-off attitudes are giving way to more
active governance (see "The fight back against rape and death
threats online"). Ultimately, the state, the tech giants and their
users will have to work in concert to make online spaces more civil.
That's as it should be: the best policing is by consent, not diktat
(see "Why are US police so prone to violence?").

For that, the state must eschew the temptation to overreach, even
with the best of intentions. Tech giants must stop posturing as
cocky upstarts and live up to their responsibilities as corporate
citizens. And individuals will have to start thinking harder before
they hit "send". It's time we lived up to our responsibilities, too.

[tt] NS 2999: How to Think about... (12 articles)

My essay, How to Think about Rising Inequality, was littered with the
phrase "think about."

NS 2999: How to Think about... (12 articles)
et seq.

1. Space-time
* 12 December 2014 by Anil Ananthaswamy

Space-time. Often described as the fabric of reality, this
four-dimensional amalgamation of space and time was set at the heart
of physics by Einstein (see "How to think about... Relativity"). But
what is it?

A popular way of envisaging space-time is as a stretchy rubber sheet
that deforms when a mass is placed on it, with the varying curvature
analogous to the warping of space-time by gravity.

It's a picture that might lead us to believe space-time is itself
something physical or tangible. But the physical manifestation of
the dimensions we move through is, if anything, the fields they
contain (see "How to think about... Fields"). For most physicists,
space-time itself is a lot more abstract - a purely mathematical
backdrop for the unfolding drama of the cosmos. Martin Bojowald of
Penn State University in University Park, Pennsylvania, sees it as
a mathematical
entity called a manifold. The equations of general relativity allow
us to calculate the evolution of this manifold, and so of the
universe itself, over time. "The rubber sheet is a picture for such
a manifold, so in an abstract way I am indeed using the analogy," he
says.

Don Marolf of the University of California, Santa Barbara, goes even
further. "Visualising the 'shape' of space-time is very useful," he
says. "But most of us don't visualise it as something particularly
physical. To the extent that we draw pictures, they are just chalk
lines on the blackboard."

One thing that unifies all of these conceptions of space-time is
that it is a "continuum", something that varies smoothly with no
abrupt knobs, bumps or tears. But if we want to combine general
relativity with quantum mechanics to create a unified theory of
quantum gravity, that notion must change. In quantum gravity,
space-time is made up of tiny discrete quanta just like everything
else - making it a fabric with a discernible warp and weft.

"Fabric as opposed to a rubber sheet means that we are focusing more
on what possible microstructures space-time may have," says
Bojowald. Carlo Rovelli of the University of Marseilles, France,
visualises this woven microstructure as being made of "tiny fuzzy
blobs", the starting point of his theoretical investigations of
quantum gravity. It's still just a device, though: something that
helps him work with the intangible. "If I do not have an image in my
head, I cannot even start thinking," he says.
2. Computing
* 13 December 2014 by Douglas Heaven

We all know what computers are, right? They sit on our desks and in
our pockets, and put the smarts into everything from cars to
washing machines.

That's not wrong - and yet it's not entirely right.

At its most basic, a computer takes information as an input,
transforming it according to some predetermined rules into a
different output. The digital electronic computers that rule our
world do this using little pulses of electric current. But there's
no reason it has to be that way. "An abacus allows us to compute by
moving stones around," says Peter Bentley at University College
London. "If you can do that, I struggle to think of anything you
cannot compute with."

Sundials convert shadows to time, the liver regulates chemical
outputs according to inputs, even rocks store mineral compositions
for later breakdown and release: all of these things fit the
broadest definition of a computer. "The notion of 'computation'
currently appears to float dangerously free of its foundations,"
says Mark Bishop of Goldsmiths, University of London.

One way out is to suggest a hierarchy of computing machines. At the
bottom are "finite state machines": things like traffic lights and
elevators that do little more than cycle through a limited series of
input and output states.
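
A traffic light as a finite state machine is small enough to write
out in full. A minimal sketch; the state names and the "tick" input
are illustrative:

```python
# A finite state machine is just a fixed set of states plus a
# transition table: (current state, input) -> next state.
TRAFFIC_LIGHT = {
    ("red", "tick"): "green",
    ("green", "tick"): "amber",
    ("amber", "tick"): "red",
}

def run_fsm(transitions, state, inputs):
    """Feed a sequence of input symbols through a transition table
    and return the final state."""
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state
```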

Digital computers fall into the category of Turing machines. As
conceived by Alan Turing in the 1930s, these read symbols from an
infinitely long input tape and substitute them according to a set of
rules, thereby simulating the behaviour of any conceivable
algorithm. Basic though it seems, this still provides perhaps the
best understanding of the limits of computation, says Bentley.
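
The tape-and-rules description above can be made concrete with a toy
machine. This sketch flips the bits of its input and halts at the
first blank; the rule-table encoding is one convention among many:

```python
# Minimal Turing machine: the head reads a symbol, writes a
# replacement, moves one cell, and changes state per the rule table.
def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """rules: {(state, symbol): (new_state, write_symbol, move)},
    move in {-1, +1}. Runs until the 'halt' state (or max_steps,
    standing in for the tape's unbounded runtime)."""
    cells = dict(enumerate(tape))  # sparse tape; blank cell = ' '
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, " ")
        state, cells[head], move = rules[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip()

# Rule table for a machine that inverts every bit, then halts.
FLIP_BITS = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", " "): ("halt", " ", +1),
}
```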

According to Turing's model, though, there are well-defined problems
that no computer can answer, such as the self-referential Halting
problem, which asks "Will this program stop?". No computer can say
yea or nay without actually running the program (and possibly never
stopping). Other problems, though theoretically computable, take an
almost endless time to solve. "Computer science was born very
conscious of its limitations," says Christos Papadimitriou at the
University of California, Berkeley.
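
The self-reference at the heart of the halting problem can be
written down directly. The `halts` oracle below is hypothetical by
construction: the point of Turing's argument is that no correct
implementation of it can exist.

```python
# Suppose halts(program, arg) always answered correctly whether
# program(arg) stops. The diagonal construction below shows why it
# cannot: paradox(paradox) defeats any answer the oracle gives.
def halts(program, arg):
    """Hypothetical oracle; unimplementable in general."""
    raise NotImplementedError("no such total decision procedure exists")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # program applied to itself.
    if halts(program, program):
        while True:      # oracle said "halts", so loop forever
            pass
    return "halted"      # oracle said "loops", so halt
```

If `halts(paradox, paradox)` returned True, `paradox(paradox)` would
loop forever; if False, it would halt. Either answer is wrong, so no
such oracle can exist.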

Models of computing more powerful than Turing machines do exist;
Turing himself speculated about them (New Scientist, 19 July, p 34).
Some people think biological processes might be able
to implement such "super-Turing" computation, accounting, perhaps,
for some of our own excessive smarts. Others think that super-Turing
models can only work by breaking the laws of physics as we know
them. If so, that suggests an intriguing and deep-seated connection
between notions of computability and the workings of the universe.
If we can't quite get our head around computers, perhaps it is
because we are sitting in the biggest of them all.
3. Quantum reality
* 12 December 2014 by Anil Ananthaswamy

"I believe in an external physical reality beyond my own
experience," says Johannes Kofler of the Max Planck Institute of
Quantum Optics in Garching, Germany. "The world would be there
without me, and was there before me, and will be there after me."

Given what we know about quantum physics, that seems a bold
statement. The assaults that this most fundamental theory of reality
makes on our intuition are legion: particles that exist as
probabilistic wave functions in "superpositions" of multiple states
or places, or at least seem to as long as you don't look at them;
"entangled" particles that influence each other over vast distances
of space when you measure one of them.

For Aephraim Steinberg at the University of Toronto in Canada,
dealing with such troubling concepts is a matter of retraining our
brains. "As much as we talk about 'counter-intuitiveness' of quantum
mechanics, we just mean that it's counter to the intuitions we have
before we learn quantum mechanics," he says. After all, we aren't
that great at second-guessing aspects of classical reality, either:
how many of us would naturally say that feathers and bricks fall at
the same rate under gravity?

With quantum physics, though, it doesn't help that the quantities
used to describe objects seem to exist only mathematically.
Visualising a wave function as a real thing is fine for a single
particle, but things rapidly get more tricky. "Once you're talking
about more than one particle, the wave function lives in some
high-dimensional space I don't know how to visualise," says
Steinberg. He ends up having to break a complex quantum system down
into parts. "But they're all merely ways of chipping away at the
abstract mathematical object I know provides the complete
description."

More fundamentally, though, if you accept quantum physics at face
value then at least one of two dearly held principles from the
classical world must give. One is realism, the idea that every
object has properties that exist without you measuring them. The
other is locality, the principle that nothing in the universe can
influence anything else "instantaneously" - faster than the speed
of light.

For most quantum physicists, it's realism that has to give, given
all the evidence that the cosmic speed limit is never broken. The
Copenhagen interpretation is the most widespread of the approaches
that result, says Kofler. This demands that at least some properties
of microscopic objects don't exist prior to and independent of
measurement. Alternatively, physicists resort to the many worlds
interpretation, in which all possible results of a measurement
happen, each spawning a different universe - whatever that means.

It's fair to say that no one really gets all this. That means
practitioners of quantum physics need to guard against relying too
heavily on new intuitions and imagery, says Steinberg. "That's
exactly the point at which one develops a dangerous
4. Infinity
* 13 December 2014 by Richard Webb

Ian Stewart has an easy, if not particularly helpful, way of
envisaging infinity. "I generally think of it as: (a) very big, but
(b) bigger than that," says the mathematician from the University of
Warwick in the UK. "When something is infinite, there is always some
spare room around to put things in."

Infinity is one of those things with a preprogrammed boggle factor.
Mathematically, it started off as a way of expressing the fact that
some things, like counting, have no obvious end. Count to 146 and
there's 147; count to a trillion and say hello to a trillion and
one. There are two ways of dealing with this, says Stewart. "You can
sum it up boldly as 'there are infinitely many numbers'. But if you
want to be more cautious, you just say 'there is no largest
number'."

Only in the late 19th century did mathematicians plump for the first
option, and begin to handle infinity as an object with properties
all its own. The key was set theory, a new way of thinking of
numbers as bundles of things. The set of all whole numbers, for
example, is a well-defined and unique object, and it has a size:
infinity.

The sting in the tail, as the German mathematician Georg Cantor
showed, is that by this definition there is more than one infinity.
The set of the whole numbers defines one low-lying sort, known as
countable infinity. But add in all the numbers in between, with as
many decimal places as you please, and you get a smoother, more
continuous infinity - one defined by a set that is infinitely
bigger.
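
Cantor's argument for that bigger infinity can be demonstrated on
any finite list of decimal expansions: change the n-th digit of the
n-th entry and the result differs from every row, so no list
indexed by the whole numbers, however long, can contain every such
expansion. A small sketch:

```python
# Finite sketch of Cantor's diagonal argument: build a digit
# sequence that differs from row n at position n, so it cannot
# appear anywhere in the given list.
def diagonal_escape(digit_rows):
    """digit_rows: square list of digit lists. Returns a digit list
    differing from each row n in its n-th digit."""
    return [(row[i] + 1) % 10 for i, row in enumerate(digit_rows)]
```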

That is just the beginning. Hugh Woodin is a set theorist at Harvard
University who has a whole level of infinity named after him, a
particularly vertiginous level populated with numbers known as
Woodin cardinals. "They are so large you can't deduce their
existence," he says.

Such infinities help solve otherwise unsolvable problems in less
rarefied mathematical landscapes below. They are the ultimate
abstraction: although you can manipulate them logically, you can't
write formulae incorporating them or devise computer programs to
test predictions about them. Woodin's notepads consist mainly of
cryptic marks he uses to focus his attention, to the occasional
consternation of fellow plane passengers. "If they don't try to
change seats they ask me if I'm an artist," he says.

How closely our common-sense conception of endlessness matches the
mathematical infinities isn't clear. But if we can't quite grasp
boundarylessness, it probably doesn't matter, says Woodin - however
you slice it, infinity seems far removed from anything we see in the
real world. Perhaps those enigmatic markings aren't so different
from those of his fellow passengers after all. "It might be we're
just playing a game," says Woodin. "Perhaps we are just doing some
glorified sudoku puzzle."
5. Deep time
* 14 December 2014 by Graham Lawton

In June 1788, Scottish geologist James Hutton took his colleagues
John Playfair and James Hall to Siccar Point on the Berwickshire
coast. To unenlightened eyes, the rocky promontory would have
appeared eternal and unchanging. But Hutton and his fellow
travellers knew better. As Playfair later wrote: "The mind seemed to
grow giddy by looking so far into the abyss of time."

Visit Siccar Point today and it still appears almost exactly as it
did in 1788. Hutton realised that this continuity was an illusion,
and that the sequence of events he could read in the rocks spoke of
unimaginably slow changes occurring over mind-expanding stretches of
time. The "angular unconformity" of rock layers of varying types
and orientation could only have formed over tens of millions of
years. It was a crucial piece of evidence in his theory of Earth's
gradual evolution and the revolutionary concept of deep time.

Little more than a century earlier the Primate of All Ireland,
Archbishop James Ussher, had used the Bible and other sources to
pinpoint the date of creation to Sunday 23 October 4004 BC. Isaac
Newton disagreed: he thought the year was 3988 BC. Then, as now,
deep time went deeply against the grain of common sense. "Measuring
things against a human lifespan is a normal and natural way to
think," says John McNeill, an environmental historian at Georgetown
University in Washington DC.

Through the heroic efforts of Hutton and many after him, we now know
that Earth is around 4.54 billion years old and the universe about
13.8 billion. Our world is almost inconceivably old.

Deep time was central to the development of the historical sciences
- geology, evolutionary biology and cosmology - and remains so.
"Their ruling ideas absolutely depend on dealing with deep time,"
says McNeill. Without it, we can't appreciate that some processes,
whether the weathering of rocks, the evolution of species or the
formation of galaxies, generally occur on timescales so slow as to
be inappreciable within a human lifespan. "We might be misled into
supposing all is stationary, as most people in most cultures have,"
says McNeill.

As Playfair found, trying to conceive deep time can induce vertigo,
and our odd glimpses of it - ancient fossils dug from geological
strata, pictures of star fields from the deep cosmos - can be hard
to deal with. But for McNeill it is also liberating. "What it means
to me is that we are all part of an unimaginably long chain of
being, both human and non-human, and our own travails don't amount
to a hill of beans."
6. The big bang
* 14 December 2014 by Richard Webb

Space is big, wrote Douglas Adams in The Hitchhiker's Guide to the
Galaxy. "Really big. You just won't believe how vastly hugely
mindbogglingly big it is." Too right: the edge of the observable
universe is some 46 billion light years away. Within that volume
there are anything between 100 and 200 billion galaxies, each
containing hundreds of billions of stars.

If that weren't mind-blowing enough, according to the big bang
theory - our best stab at explaining how it all came to be -
everything exploded into being from nowhere, about 13.8 billion
years ago. An infinitesimal pinprick of unimaginable heat and
density has slowly stretched and cooled into the cosmos we know
today.

How can we get our heads around that? For cosmologist Martin Rees of
the University of Cambridge there are two strategies: bury yourself
in equations, or draw pictures. "I'd put myself in the picture
camp," he says.

He envisages the expanding universe by imagining himself at one node
of a three-dimensional lattice stretching as far as the mind's eye
can see, with the nodes linked by rods, all of which are expanding.
That way you can visualise the universe moving away from you in all
directions - while recognising that you would see the same thing
from any other node. "You understand there is no central position,"
he says.
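
Rees's lattice picture can be sketched numerically. In this toy model
(my illustration, not from the article), nodes sit on a line and every
separation grows at the same fractional rate; each observer sees every
other node receding at a speed proportional to its distance from
*them*, so no node is special:

```python
def recession_speeds(positions, scale_rate, observer):
    """Speed of each node relative to `observer` when all coordinates
    grow at fractional rate `scale_rate` (a toy Hubble constant)."""
    x0 = positions[observer]
    return [scale_rate * (x - x0) for x in positions]

nodes = [0.0, 1.0, 2.0, 3.0, 4.0]  # lattice coordinates, arbitrary units
H = 0.1                            # fractional expansion rate per unit time

from_node0 = recession_speeds(nodes, H, observer=0)
from_node3 = recession_speeds(nodes, H, observer=3)
# Node 0 sees speeds of roughly 0, 0.1, 0.2, 0.3, 0.4;
# node 3 sees roughly -0.3, -0.2, -0.1, 0, 0.1.
# Either way, speed/distance is the same constant H.
```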

There's also no discernible edge: periods of accelerated expansion
during the universe's earliest instants and in more recent aeons
mean that the horizon of the observable universe is by no means the
end of all things. Beyond it are galaxies we will never see because
the intervening space is expanding too fast, so their light can
never reach us. "I think it is fair to say they are retreating from
us faster than the speed of light," says Rees.

Even more challenging is to envisage what came before the big bang.
Our current conceptions of physics suggest the question makes no
sense: as we rewind time to the very first instants, the intense
concentration of energy jumbles up even space and time in a
confusion of... stuff. "There is no direction of time, so there's no
before and after," says Rees. "The analogy that's always made is
it's like asking what's north of the North Pole."

Not that this stops some physicists trying. The theory of eternal
inflation proposes that when the universe began in the big bang, it
might just have been one of a multitude budding off from a larger
entity - and that other universes might constantly be budding off
our own. "Is there a beginning in that theory?" asks Rees. "If you
have universes sprouting and they are governed by different physical
laws, what then? That's where my intuition breaks down completely."
7. Probability
* 15 December 2014 by Richard Webb

Probability is one of those things we all get wrong... deeply wrong.
The good news is we're not the only ones, says John Haigh, a
mathematician at the University of Sussex in Brighton, UK, and
author of Probability: A very short introduction. "Many pure
mathematicians claim that probability has many unreasonable
results."

Take the classic problem of a class of 25 schoolchildren. How likely
is it that two of them share the same birthday? The common-sense
answer is that it is not implausible, but quite unlikely. Wrong:
it's actually just under 57 per cent.

Or the celebrated Monty Hall problem, named after the former host of
US television game show Let's Make a Deal. You're playing a game in
which there are three doors, one hiding a car, two of them goats
(see illustration). You choose one door; the host of the game then
opens another, revealing a goat. Assuming you'd rather win a car
than a goat, should you stick with your choice or swap?

The naive answer is it doesn't matter: you now have a 50-50 chance
of striking lucky with your original door. Wrong again.

But if probability makes even experts grumble, how do we get it
right? Simple, says mathematician Ian Stewart of the University of
Warwick in the UK: do things the hard way. "The important thing with
probability is not to intuit it," he says. Think carefully about how
the problem is posed and do your sums diligently, and you'll arrive
at the right answer - eventually.

With the birthday problem, the starting point is to realise that
you're not interested in individual schoolchildren, but pairs. In a
class of 25, there are 300 pairs to consider and in most years 365
days on which each might share a birthday. Factor all that in, and
you end up crunching some truly astronomical numbers to arrive at
the answer. "Any coincidence like that is remarkable in itself, but
when you ask how many times it would happen, that number is so vast
it's not remarkable at all," says Haigh.
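
The arithmetic behind the "just under 57 per cent" figure is short
enough to check directly: compute the chance that all 25 birthdays are
different, then subtract from 1 (ignoring leap days and assuming
uniformly spread birthdays, as the puzzle implicitly does):

```python
def shared_birthday_probability(class_size, days=365):
    """Probability that at least two of `class_size` people share a
    birthday, assuming independent, uniformly distributed birthdays."""
    p_all_distinct = 1.0
    for i in range(class_size):
        # The (i+1)th person must avoid the i birthdays already taken.
        p_all_distinct *= (days - i) / days
    return 1.0 - p_all_distinct

print(f"{shared_birthday_probability(25):.4f}")  # just under 0.57
```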

With the Monty Hall problem, meanwhile, the chance you chose the
right door in the first place is 1/3 - and that doesn't change
whatever happens afterwards. Since the host has revealed a goat,
there is now a 2/3 probability that the car is behind the other door
- and you are better off swapping.
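
A quick simulation makes the 1/3 versus 2/3 split hard to argue with.
This is a sketch of the standard version of the game, in which the
host always opens a goat door the player didn't pick:

```python
import random

def play(swap, rng):
    """Play one round; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # The host opens a door that hides a goat and isn't the pick.
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if swap:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(42)
trials = 100_000
stick_wins = sum(play(False, rng) for _ in range(trials)) / trials
swap_wins = sum(play(True, rng) for _ in range(trials)) / trials
print(stick_wins, swap_wins)  # roughly 1/3 and 2/3
```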

There are a few caveats: if the host is so devious as only to open a
door if you chose the right one in the first place, you'd be mad to
swap. Ditto if you want a goat rather than the car. That illustrates
another important rule in thinking about probability, says Haigh.
"It is very important to know your assumptions. Very subtle changes
can change the outcome."

All this is very well when the boundaries of the problem are clear
and the possible outcomes quantifiable. Toss a fair coin and you
know you have a 50 per cent chance of heads - because you can repeat
the exercise over and over again if necessary.

But what about a 50 per cent chance of rain today, or of a horse
with even odds winning a race? No amount of expert advice can help
us assess the true worth of such "subjective" probabilities, which
are fluid and often based on inscrutable expertise or complex
modelling of an unpredictable world. Sometimes you do just have to
go with your gut instinct - and be prepared to be wrong.
8. Fields
* 16 December 2014 by Richard Webb

Frank Close has a question. "If you step off the top of a cliff, how
does the Earth down there 'know' you are up there for it to attract
you?" It's a question that has taxed many illustrious minds before
him. Newton's law of gravitation first allowed such apparently
instantaneous "action at a distance", but he himself was not a fan,
describing it in a letter as "so great an Absurdity that I believe
no Man who has in philosophical Matters a competent Faculty of
thinking can ever fall into it".

Today we ascribe such absurdities to fields. "The idea of some
physical mediation - a field of influence - is more satisfying,"
says Close, a physicist at the University of Oxford. Earth's
gravitational field, for example, extends out into space in all
directions, tugging at smaller objects like the moon and us on top
of a cliff; the Earth itself is under the spell of the sun's
gravitational field.

But hang on: what exactly is a field?

On one level, it is just a map. "Ultimately, a field is something
that depends on position," says Frank Wilczek, a theoretical
physicist at the Massachusetts Institute of Technology. A
gravitational field tells us the strength of gravity at different
points in space. Temperatures or isobars on a weather chart are a
field. A field is a mathematical abstraction - numbers spread over
space.

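In Wilczek's minimal sense, a field is just a rule assigning a number
to every point. A toy illustration (my sketch, not from the article):
Earth's gravitational field strength as a function of distance from
its centre, sampled at a few radii:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # radius of Earth, m

def g_field(r):
    """Field strength in m/s^2 at distance r (m) from Earth's centre:
    the 'map' that assigns a number to every position."""
    return G * M_EARTH / r**2

for n in [1, 2, 60]:  # surface, 2 Earth radii, roughly the moon's orbit
    print(f"{n:3d} R_E: {g_field(n * R_EARTH):.4f} m/s^2")
```

The familiar 9.8 m/s^2 is just this map's value at the surface; the
inverse-square falloff is what the field lines of the next paragraph
depict.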
But there is more to it than that. Witness what physicist Michael
Faraday saw in the 19th century, and many a schoolkid has since:
iron filings neatly ordering themselves along the lines of a
magnetic field, reaching out into space from the magnet itself and
influencing nearby objects (though at the speed of light, not
instantaneously). "It made a huge impression on Faraday, that this
strange thing had a physical reality," says Wilczek.

Arguably the modern world is built on the principle of
electromagnetic induction that Faraday developed out of his new
understanding of fields: magnetic fields and electric fields power
the motors of our civilisation. A mere abstraction?

The modern era has shed some further light on fields, but also added
confusion. Quantum fields - ultimately, the electromagnetic field is
one - have tangible products in the form of particles, which pop up
as disturbances within them. For the electromagnetic field, this
entity is the photon. The Higgs field, long postulated to pervade
empty space and to give elementary particles their mass, was
discovered in 2012 by squeezing out its particles in high-energy
collisions.

But quantum fields are complicated beasts, formed of
"superpositions" of many classical fields. That's far away from
anything we can envisage as a map, or delineate as neat lines. "At
that point I have to rely on equations," says Wilczek, who won a
Nobel prize for his work on the quantum fields of the strong nuclear
force.

One thing's for sure: fields are everywhere. Quantum theory teaches
us that even seemingly empty space is a roiling broth of fields and
their associated particles. "The idea that nothing's there is
extremely naive," says Wilczek. Aside from anything else, fields are
the proof that nature does indeed abhor a vacuum.
9. Mathematics
* 15 December 2014 by Catherine de Lange

Mathematics has a fearsome rep as the discipline of iron logic. But
for its practitioners, sometimes the best way to think clearly is to
think vaguely.

Mathematics is like a language - but one that, thanks to its inbuilt
logic, writes itself. That's how mathematician Ian Stewart sees it,
anyway. "You can start writing things down without knowing exactly
what they are, and the language makes suggestions to you." Master
enough of the basics, and you rapidly enter what sports players call
"the zone". "Suddenly it gets much easier," Stewart says. "You're
propelled along."

But what if you don't have such a maths drive? It's wrong to think
it's all down to talent, says mathematician and writer Alex Bellos:
even the best exponents can take decades to master their craft. "One
of the reasons people don't understand maths is they don't have
enough time," he says. "It's not supposed to be easy."

Sketching a picture of the problem helps. Take negative numbers.
Five sheep are easy enough to envisage, but what about minus five?
"We can't see the minus five sheep, so you can't get your head
around it," says Bellos. It was only when someone had the bright
idea of arranging all the existing numbers 0,1,2,3... on a line that
it became obvious where the negative numbers fitted in. Similarly,
complex numbers - 2D numbers that underpin the mathematics of
quantum theory, among other things - only really took off with the
advent of a "complex plane" in which to depict them.
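
Python's built-in complex type makes the complex-plane picture
concrete (a small illustration of the idea, not from the article): a
complex number is a 2D point, and multiplying by i rotates it a
quarter-turn about the origin without changing its distance from it:

```python
z = 3 + 4j            # the point (3, 4) in the complex plane
rotated = z * 1j      # multiplying by i rotates 90 degrees anticlockwise

print(z.real, z.imag)              # 3.0 4.0
print(rotated.real, rotated.imag)  # -4.0 3.0
print(abs(z), abs(rotated))        # distance from origin unchanged: 5.0 5.0
```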

Analogies also help. If thinking about ellipses oppresses you, think
about a circle that's been squashed and work from there, says
Stewart. Overall, contrary to the impression of mathematics as a
discipline of iron logic, the best way to attack a problem of any
sort is often to get a brief overview of it, skip over anything you
can't work out and then go back and fill in the details. "A lot of
mathematicians say it's important to be able to think vaguely," says
Stewart.
10. Relativity
* 16 December 2014 by Richard Webb

Space and time used to be so simple. You trundled around reasonably
freely in the three dimensions of the one, and experienced
occasional heartache at the remorseless forward march of the other.
C'est la vie.

Or is it? Einstein revolutionised our perceptions a century ago
when, in his theories of relativity, he first forbade anything in
the cosmos from travelling faster than the speed of light, and then
bundled both space and time into one unified space-time that can be
warped by gravity (see "How to think about... Space-time"). The
contortions introduced by Einstein's special and general theories
make intervals in both space and time dependent on where we measure
them from. Two observers with flashlights in fast-moving trains
might both measure the other to have flashed their flashlight first
- and both be right from their own point of view.

The recent blockbuster Interstellar is based on
premises that Einstein made technically plausible, if not (yet)
technologically feasible: that by travelling close to the speed of
light, or moving in an intense gravitational field such as that of a
black hole, we age more slowly than those we leave behind on Earth
(see diagram). We don't need to travel that far to see less dramatic
effects of relativity in action. Astronauts on the International
Space Station age a little less because of the velocity at which
they travel, and a little more for enjoying less of the gravity of
mothership Earth. The effects don't quite cancel out. Velocity wins,
leaving each ISS astronaut who completes a six-month tour of duty
0.007 seconds younger than someone who stayed on Earth.
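
The two competing effects can be estimated with first-order formulas:
a special-relativistic slowing of v^2/2c^2, and a gravitational
speed-up of GM(1/r_E - 1/r_ISS)/c^2 from sitting higher in Earth's
potential. Plugging in rough ISS numbers (my assumed values below)
gives a net deficit of a few milliseconds over six months - the same
order of magnitude as the quoted figure, with the exact value
depending on mission length and orbital details:

```python
C = 2.998e8               # speed of light, m/s
GM = 3.986e14             # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6         # m
R_ISS = R_EARTH + 4.2e5   # ISS altitude ~420 km (assumed)
V_ISS = 7.66e3            # ISS orbital speed, m/s (assumed)
SIX_MONTHS = 182.5 * 86400  # seconds

# Velocity effect: the orbiting clock runs slow (astronaut ages less).
velocity_deficit = (V_ISS**2 / (2 * C**2)) * SIX_MONTHS

# Gravity effect: higher altitude, clock runs fast (astronaut ages more).
gravity_gain = (GM * (1/R_EARTH - 1/R_ISS) / C**2) * SIX_MONTHS

net = gravity_gain - velocity_deficit  # negative: velocity wins
print(f"velocity: -{velocity_deficit*1e3:.2f} ms, "
      f"gravity: +{gravity_gain*1e3:.2f} ms, net: {net*1e3:.2f} ms")
```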

For most everyday purposes, such effects matter not a jot. But for
physicists like Sean Carroll of the California Institute of
Technology in Pasadena, who peer deep into the cosmos, relativity is
a crucial consideration. He often resorts to drawing a diagram. "As
far as special relativity goes, it's very natural to think in terms
of pictures," he says. Relativity can seem full of paradoxes if we
don't first think carefully about how our own motion affects our
perception of how time is passing for others - but also how others
might see our time passing differently, too.

Carroll has a few rules of thumb to guide his own perceptions.
"Basically, time is kind of like space, but not exactly," he says.
The main difference is that whereas in space a straight line is the
shortest distance between two points, in time it is the longest. The
way to minimise the time you experience between two events that
occur at the same point in space is to move as far and as fast as
you can in the interim. "If you zoom off near the speed of light,
then zoom back, you will experience less time than someone who
simply sits still," says Carroll. So time passes slowly when you're
having fun.
11. Evolution
* 17 December 2014 by Michael Le Page

In a cave, a bear gives birth to two cubs one long dark night. In
the morning, the weak winter light reveals something strange: the
cubs' fur is white, in stark contrast to the dark fur of their
mother. They are freaks... or are they?

What is evolution? Easy, you might think: it's the way living
organisms change over time, driven by natural selection. Well,
that's not wrong, but it's not really how evolutionary biologists
think of it.

Picture those bear cubs. Here we see a dramatic physical change, but
it isn't evolution. Among black and brown bears, white bear cubs are
not that uncommon. But white bears don't have more cubs than other
bears, so the gene variants for white fur remain rare.

Among one group of brown bears living in the Arctic, though, white
fur was an advantage, helping them sneak up on prey. There white
bears thrived and had more offspring - their "fitness" increased -
so the proportion of white bears rose until the entire population
was white. This is definitely evolution. It happened as polar bears
evolved from brown bears a few million years ago.

So although we tend to think about evolution in terms of the end
results - physical changes in existing species or the emergence of
new ones - the key concept is the spread of genetic variants within
a population.

The results of this process can appear purposeful. Indeed, it is
convenient to talk as if they are: "polar bears evolved white fur
for camouflage". But it all comes down to cold numbers: a random
mutation that boosts fitness spreading in a population.

What's more surprising is that even mutations that don't increase
fitness can spread through a population as a result of random
genetic drift. And most mutations have little, if any, effect on
fitness. They may not affect an animal's body or behaviour at all,
or do so in an insignificant way such as slightly altering the shape
of the face. In fact, the vast majority of genetic changes in
populations - and perhaps many of the physical ones, too - may be
due to drift rather than natural selection. "Do not assume that
something is an adaptation until you have evidence," says biologist
Larry Moran at the University of Toronto, Canada.
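
Both routes by which a variant can spread - selection and pure drift -
can be sketched with a minimal Wright-Fisher-style simulation (an
illustrative toy, not from the article). Each generation, the next
population is drawn at random from the current one, with carriers of
the variant weighted by a fitness advantage s; set s = 0 and the
variant can still wander to fixation or loss by chance alone:

```python
import random

def wright_fisher(pop_size, n_carriers, s, generations, rng):
    """Track the count of a variant whose carriers have relative
    fitness 1 + s. Returns the carrier count at each generation."""
    counts = [n_carriers]
    for _ in range(generations):
        k = counts[-1]
        if k == 0 or k == pop_size:
            counts.append(k)  # absorbing: variant lost or fixed
            continue
        # Expected frequency after selection, then random sampling.
        p = k * (1 + s) / (k * (1 + s) + (pop_size - k))
        counts.append(sum(rng.random() < p for _ in range(pop_size)))
    return counts

rng = random.Random(1)
selected = wright_fisher(200, 10, s=0.10, generations=300, rng=rng)
neutral = wright_fisher(200, 10, s=0.0, generations=300, rng=rng)
print(selected[-1], neutral[-1])  # advantaged variants usually fix;
                                  # neutral ones wander at random
```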

So it is wrong to think of evolution only in terms of natural
selection; change due to genetic drift counts too. Moran's minimal
definition does not specify any particular cause: "Evolution is a
process that results in heritable changes in a population spread
over many generations."

It does not even have to involve many generations, says Michael
Kinnison of the University of Maine in Orono, who studies how living
species are evolving. Evolution occurs almost continuously, he says.
It usually takes time for populations to change significantly, but
sometimes it happens very fast, for instance when only individuals
of a particular genetic type survive some catastrophe, or when only
tumour cells with a particular mutation are not killed by a cancer
drug.

In these cases, there is no need to wait for the survivors to
reproduce to determine that the population has changed. "I would say
evolution occurs whenever some process changes the distribution of
heritable traits in a population, regardless of the time scale,"
Kinnison says. "While evolutionary biologists like to treat
evolution as a generation-to-generation process, that is often more
a matter of convenience than reality."

Suppose those white bear cubs somehow reached an island and founded
a new bear population. The interbreeding of white bears always
produces white offspring, and thus being white would be normal
there. So we can boil down the concept of evolution to just six
words: Evolution is what makes freaks normal.
12. Alien contact
* 24 December 2014 by Douglas Heaven

"We must realise that there are other worlds in other parts of the
universe, with races of different men and different animals." That
was the Roman poet Lucretius, writing in the 1st century BC.

Only in the past few decades have we grasped the truth of the first
part of that statement, thanks to planet-hunters such as NASA's
Kepler space telescope. Almost 5000 suspected planets have already
been spotted outside our solar system, and stars nursing planets
seem to be the rule, not the exception. With hundreds of billions of
stars in our galaxy alone, that's an awful lot of worlds.

Surely, then, it's only a matter of time before we confirm the
second part by finding signs of life.

Perhaps. Life on Earth has required billions of years to evolve
organisms capable of asking such questions, and that process has
been anything but inevitable. Other life, if it exists, may be
nothing like life as we know it.

This is the central dilemma of searches for ET, says Jeffrey Scargle
of the NASA Ames Research Centre in Moffett Field, California. "You
can't assume nothing because then you don't even know how to start
looking, but if you assume too much then you're biased and you're
not open to finding a lot of things that might be there."

So alien life might be carbon-based, and ultimately get its energy
from starlight through a process like photosynthesis. Or it might
work entirely differently, in which case searches for the metabolic
products of carbon-based life in far-off atmospheres - oxygen,
methane and the like - will never be a smoking gun.

Perhaps more conscious signals from similarly questioning advanced
civilisations are a better bet. "It's natural to assume that any
intelligent civilisation would have that concept of needing to wave
a flag," says Scargle. He is studying Kepler data for signs of
"star-tickling": variations in a star's brightness induced by aliens
using it in some way as a beacon. But as far as decoding any message
goes, he's sceptical: even our idea of what counts as a regular
pattern might not be shared by others.

Scargle's colleague Lucianne Walkowicz points out that even looking
for something like radio transmissions might be assuming too much.
"Earth is getting quieter as it gets more advanced, not louder," she
says. And what if we were to pick up an incontrovertible signal of
alien civilisation from, say, 10,000 light years away across the
Milky Way? That would tell us of something existing 10,000 years ago
- a long time by the measure of human civilisation to date. Perhaps
even a lively cosmos is a lonely one.
tt mailing list

Thursday, December 18, 2014

[tt] NYT: As Robots Grow Smarter, American Workers Struggle to Keep Up

As Robots Grow Smarter, American Workers Struggle to Keep Up

by Claire Cain Miller

A machine that administers sedatives recently began treating
patients at a Seattle hospital. At a Silicon Valley hotel, a bellhop
robot delivers items to people's rooms. Last spring, a software
algorithm wrote a breaking news article about an earthquake that The
Los Angeles Times published.

Although fears that technology will displace jobs are at least as
old as the Luddites, there are signs that this time may really be
different. The technological breakthroughs of recent years--
allowing machines to mimic the human mind--are enabling machines
to do knowledge jobs and service jobs, in addition to factory and
clerical work.

And over the same 15-year period that digital technology has
inserted itself into nearly every aspect of life, the job market has
fallen into a long malaise. Even with the economy's recent
improvement, the share of working-age adults who are working is
substantially lower than a decade ago--and lower than any point in
the 1990s.

Economists long argued that, just as buggy-makers gave way to car
factories, technology would create as many jobs as it destroyed. Now
many are not so sure.

Lawrence H. Summers, the former Treasury secretary, recently said
that he no longer believed that automation would always create new
jobs. "This isn't some hypothetical future possibility," he said.
"This is something that's emerging before us right now."

Erik Brynjolfsson, an economist at M.I.T., said, "This is the
biggest challenge of our society for the next decade."

Mr. Brynjolfsson and other experts say they believe that society has
a chance to meet the challenge in ways that will allow technology to
be mostly a positive force. In addition to making some jobs
obsolete, new technologies have also long complemented people's
skills and enabled them to be more productive--as the Internet and
word processing have for office workers or robotic surgery has for
surgeons.

More productive workers, in turn, earn more money and produce goods
and services that improve lives.

"It is literally the story of the economic development of the world
over the last 200 years," said Marc Andreessen, a venture capitalist
and an inventor of the web browser. "Just as most of us today have
jobs that weren't even invented 100 years ago, the same will be true
100 years from now."
Yet there is deep uncertainty about how the pattern will play out
now, as two trends are interacting. Artificial intelligence has
become vastly more sophisticated in a short time, with machines now
able to learn, not just follow programmed instructions, and to
respond to human language and movement.

At the same time, the American work force has gained skills at a
slower rate than in the past--and at a slower rate than in many
other countries. Americans between the ages of 55 and 64 are among
the most skilled in the world, according to a recent report from the
Organization for Economic Cooperation and Development. Younger
Americans are closer to average among the residents of rich
countries, and below average by some measures.

Clearly, many workers feel threatened by technology. In a recent New
York Times/CBS News/Kaiser Family Foundation poll of Americans
between the ages of 25 and 54 who were not working, 37 percent of
those who said they wanted a job said technology was a reason they
did not have one. Even more--46 percent--cited "lack of
education or skills necessary for the jobs available."

Self-driving vehicles are an example of the crosscurrents. They
could put truck and taxi drivers out of work--or they could enable
drivers to be more productive during the time they used to spend
driving, which could earn them more money. But for the happier
outcome to happen, the drivers would need the skills to do new types
of jobs.

The challenge is evident for white-collar jobs, too. Ad sales agents
and pilots are two jobs that the Bureau of Labor Statistics projects
will decline in number over the next decade. Flying a plane is
largely automated today and will become more so. And at Google, the
biggest seller of online ads, software does much of the selling and
placing of search ads, meaning there is much less need for
salespeople.

There are certain human skills machines will probably never
replicate, like common sense, adaptability and creativity, said
David Autor, an economist at M.I.T. Even jobs that become automated
often require human involvement, like doctors on standby to assist
the automated anesthesiologist, called Sedasys.

Elsewhere, though, machines are replacing certain jobs.
Telemarketers are among those most at risk, according to a recent
study by Oxford University professors. They identified recreational
therapists as the least endangered--and yet that judgment may
prove premature. Already, Microsoft's Kinect can recognize a
person's movements and correct them while doing exercise or physical
therapy.

Other fields could follow. The inventors of facial recognition
software from a University of California, San Diego lab say it can
estimate pain levels from children's expressions and screen people
for depression. Machines are even learning to taste: The Thai
government in September introduced a robot that determines whether
Thai food tastes sufficiently authentic or whether it needs another
squirt of fish sauce.

Watson, the computer system built by IBM that beat humans at
Jeopardy in 2011, has since learned to do other human tasks. This
year, it began advising military veterans on complex life decisions
like where to live and which insurance to buy. Watson culls through
documents for scientists and lawyers and creates new recipes for
chefs. Now IBM is trying to teach Watson emotional intelligence.

IBM, like many tech companies, says Watson is assisting people, not
replacing them, and enabling them to be more productive in new types
of jobs. It will be years before we know what happens to the
counselors, salespeople, chefs, paralegals and researchers whose
jobs Watson is learning to do.
Stepping Out of the Labor Force
[Chart: the percentage of people ages 25 to 54 who do not work.
Source: Bureau of Labor Statistics]

Whether experts lean toward the more pessimistic view of new
technology or the most optimistic one, many agree that the
uncertainty is vast. Not even the people who spend their days making
and studying new technology say they understand the economic and
societal effects of the new digital revolution.

When the University of Chicago asked a panel of leading economists
about automation, 76 percent agreed that it had not historically
decreased employment. But when asked about the more recent past,
they were less sanguine. About 33 percent said technology was a
central reason that median wages had been stagnant over the past
decade, 20 percent said it was not and 29 percent were unsure.

Perhaps the most worrisome development is how poorly the job market
is already functioning for many workers. More than 16 percent of men
between the ages of 25 and 54 are not working, up from 5 percent in
the late 1960s; 30 percent of women in this age group are not
working, up from 25 percent in the late 1990s. For those who are
working, wage growth has been weak, while corporate profits have
soared.

"We're going to enter a world in which there's more wealth and less
need to work," Mr. Brynjolfsson said. "That should be good news. But
if we just put it on autopilot, there's no guarantee this will work
out."

Some say the nature of work will need to change. Google's
co-founder, Larry Page, recently suggested a four-day workweek, so
as technology displaces jobs, more people can find employment.
Others believe the role of the public sector should expand, to help
those struggling to find work. Many point to education, in new
technologies and in the skills that remain uniquely human, like
creativity and judgment.

"The answer is surely not to try to stop technical change," Mr.
Summers said, "but the answer is not to just suppose that
everything's going to be O.K. because the magic of the market will
assure that's true."

[tt] NYT: Innovators of Intelligence Look to Past

Innovators of Intelligence Look to Past


Seattle--Inside the Allen Institute for Artificial Intelligence,
known as AI2, everything is a gleaming architectural white. The
walls are white, the furniture is white, the counters are white. It
might as well have been a set for the space station in "2001: A
Space Odyssey."

"The brilliant white was a conscious choice meant to evoke
experimental science--think 'white lab coat,' " said Oren Etzioni,
a computer scientist and director of the new institute, which the
Microsoft co-founder Paul Allen launched this year as a sibling of
the Allen Institute for Brain Science, his effort to map the human

Yet for the 30 (soon to be 50) artificial-intelligence researchers
who can look out on a striking view of downtown Seattle, the
futuristic surroundings offer a paradoxical note: AI2 is an effort
to advance artificial intelligence while simultaneously reaching
back into the field's past.

While Silicon Valley looks to fashionable techniques like neural
networks and machine learning that have rapidly advanced the state
of the art, Dr. Etzioni remains a practitioner of a modern version
of what used to be known as Gofai, for good old-fashioned artificial
intelligence.

The reference goes back to the earliest days of the field in the
1950s and '60s, when artificial-intelligence researchers were
confident they could model human intelligence using symbolic systems
--logic embedded in software programs, running on powerful
computers.

Then in the late 1980s, an early wave of commercial
artificial-intelligence companies failed, bringing on what became
known as the "A.I. winter." The field was seen as a failure and went
into eclipse.

In recent years, however, A.I. has come roaring back as speech
recognition, machine vision and self-driving cars have made progress
with powerful computers, cheap sensors and machine-learning
techniques. That has started a Silicon Valley gold rush led by
Google, Facebook and Apple, drawing outsiders like Alibaba and Baidu
in China, all caught up in a frantic race to hire the world's best
machine-learning talent.

But the debate over how to reach genuine artificial intelligence has
not ended, and Dr. Etzioni and Mr. Allen are betting that their path
is more pragmatic. The power of the new techniques is not disputed,
but there is a growing debate over whether they can take the field
to human-level capabilities by themselves.

"Think of it as Sherlock Holmes versus Spider-Man," said Jerry
Kaplan, a visiting lecturer at Stanford who teaches a course on the
history and philosophy of artificial intelligence, comparing
Holmes's deductive powers with the irrational "spider sense" that
tingles at the base of Spider-Man's skull and alerts him to danger.

Mr. Allen, who noted that he came from a family of librarians, said
his decision to fund an artificial-intelligence research lab was
inspired by the question of how books and other knowledge might be
encoded to become the basis for computer interactions in which human
questions might be answered more fully.

"AI2 was born from a desire to create a system that could truly
reason about knowledge, rather than just offer up what had been
written on a subject before," he wrote in an email interview.

Dr. Etzioni says that the artificial-intelligence field has made
incremental advances in areas like vision and speech, but that we
have gotten no closer to the larger goal of true human-level
intelligence.

"Driverless cars are a great thing," he said, but added that the
field had given rise to "bad A.I., like the N.S.A. is using it or
Facebook is using it to track you."

"We want to be the good guys," he went on, "and it's up to us to
deliver on that."

Moreover, he says, both he and Mr. Allen believe that technology
cannot be separated from its social and economic consequences. They
have added a social mission to the project that they call
"artificial intelligence for the common good."

The success or failure of the project, however, will ultimately
hinge on whether Dr. Etzioni can create a new synthesis of
artificial intelligence, weaving together powerful machine-learning
tools with traditional logic-oriented software.

The current fad for big data, of which machine learning is a major
component, has significant limits. "If you step back a little and
say we want to do A.I., then you will realize that A.I. needs
knowledge, reasoning and explanation," he said. "My argument is that
big data has made great progress in limited areas."

Even Watson, the brainy IBM computer whose intelligence the company
wants to apply in complex applications like medical diagnoses and
automated call centers with interactive speech recognition, will
soon reach fundamental limits, he argues.

"I really don't want a system that can't explain itself to be my
doctor," he said. "I can just imagine sitting there with Dr. Watson
and the program saying, 'Well, we need to remove a kidney, Mr.
Etzioni,' and I'm like, 'What?!' and they respond, 'Well, we have a
lot of variables and a lot of data, and that's just what the model
says.' "

Dr. Etzioni, 50, was already known for innovative web projects,
including MetaCrawler, an early search engine, and an array of
successful start-up companies; one of them, Farecast, was acquired
by Microsoft and became the basis for its Bing Travel service. (The
first student to major in computer science at Harvard, he is a son
of the well-known sociologist Amitai Etzioni.)

At AI2 he is motivated by Mr. Allen's view that "in order to be
truly intelligent, computers must understand--that is probably the
critical word," as the Microsoft co-founder put it in a 1977
article.

Some technology experts argue that self-aware computing machines are
now on the horizon. "As for A.I. progress, we're mostly haggling
about a few decades," said Hans Moravec, a leading roboticist who is
the chief scientist of Seegrid Corporation, a maker of autonomous
vehicles for warehouse applications. "I'm content to simply watch it
play out, trying to do my part. I do want fully autonomous robots as
soon as possible, to begin visiting the rest of the universe."

Mr. Allen and Dr. Etzioni are not so optimistic. Both are skeptical
of claims that we may be only years away from machines that think in
any human sense.

"Full A.I., in the sense of something like HAL in '2001,' " Mr.
Allen wrote in an email interview, "is probably a hundred years away
(or more). In reality, we are only beginning to grasp how deep
intelligence works."

Dr. Etzioni wants AI2 to set measurable goals to help get a new
class of learning systems off the ground. During its first year, the
researchers have focused on three projects--one in computer vision
(in which computers learn to recognize images), one to build a
reasoning system capable of taking standardized school tests, and a
third to help scholars deal with the fire hose of information that
is inundating every scientific field.

The school-test effort, Project Aristo, seeks to create a learning
program that can collect and organize a wide range of information,
and then use that database to reason and to answer questions, even
discussing and explaining its answers with human users.

To chart Aristo's progress, researchers plan to test it on
increasingly difficult standardized science exams, moving from the
fourth grade through the 12th.

"We're not planning on putting 10th graders out of work," Dr.
Etzioni said. But he does believe that a program that can converse
with humans and answer questions would serve as a foundation for
many other achievements, going far beyond the most powerful search
engines and systems like Watson.

In September, the researchers celebrated their first milestone--60
percent correct answers in the language portion of New York State's
fourth-grade science test. Many of the questions in the actual test
include diagrams and illustrations, which will ultimately require
advances in computer vision.

That challenge is considered far more difficult than recognizing
human speech. It calls for a computer system with "scene
understanding," the human ability to extract meaning from animate
and inanimate objects that interact.

Whether AI2's research leads to a new generation of thinking machines
or just more incremental advances, the project is a clear indication
that artificial intelligence has once again become the defining
force in the software world.

"The narrative has changed," said Peter Norvig, Google's director of
research. "It has switched from, 'Isn't it terrible that artificial
intelligence is a failure?' to 'Isn't it terrible that A.I. is a
success?'"