Kira Radinsky has written an algorithm that dissects old news stories and other Internet postings to look for past cause and effect, and then can alert us to possible disasters, geopolitical events, and disease outbreaks.
Not one to let Amazon get all the robots-are-our-future attention, Google has revealed that it already has plenty of projects underway, and they are headed up by Andy Rubin. Yes, the man who made the company's Android operating system arguably the biggest smartphone OS in the world.
An Android robot running the Android operating system: a perfect fit!
Although some people might find the idea of love with a machine repulsive, experts predict that as the technology advances and robots become more human-like, we will view our silicon cousins in a friendlier light. As the future unfolds, robots will fill more roles as family caregivers, household servants, and voice-enabled avatars that manage our driverless cars, automated homes, and entertainment systems.
A discussion of the cellular automata ideas described by Stephen Wolfram (the man behind WolframAlpha) in his book A New Kind of Science. The article provides a brief overview of cellular automata and explains why Wolfram considers them so remarkable. The author of the article, Ray Kurzweil, compares cellular automata with other computational models such as evolutionary algorithms and Turing machines. A great read if you are not quite ready for the 1,200 pages of the original.
So what is the discovery that has so excited Wolfram? As I noted above, it is cellular automata rule 110, and its behavior. There are some other interesting automata rules, but rule 110 makes the point well enough. A cellular automaton is a simple computational mechanism that, for example, changes the color of each cell on a grid based on the color of adjacent (or nearby) cells according to a transformation rule. Most of Wolfram’s analyses deal with the simplest possible cellular automata, specifically those that involve just a one-dimensional line of cells, two possible colors (black and white), and rules based only on the two immediately adjacent cells. For each transformation, the color of a cell depends only on its own previous color and that of the cell on the left and the cell on the right. Thus there are eight possible input situations (2³, i.e., three cells of two colors each). Each rule maps all combinations of these eight input situations to an output (black or white). So there are 2⁸ = 256 possible rules for such a one-dimensional, two-color, adjacent-cell automaton. Half of the 256 possible rules map onto the other half because of left-right symmetry. We can map half of them again because of black-white equivalence, so we are left with 64 rule types. Wolfram illustrates the action of these automata with two-dimensional patterns in which each line (along the Y axis) represents a subsequent generation of applying the rule to each cell in that line.
Most of the rules are degenerate, meaning they create repetitive patterns of no interest, such as cells of a single color, or a checkerboard pattern. Wolfram calls these rules Class 1 automata. Some rules produce arbitrarily spaced streaks that remain stable, and Wolfram classifies these as belonging to Class 2. Class 3 rules are a bit more interesting in that recognizable features (e.g., triangles) appear in the resulting pattern in an essentially random order. However, it was the Class 4 automata that created the “ah ha” experience that resulted in Wolfram’s decade of devotion to the topic. The Class 4 automata, of which Rule 110 is the quintessential example, produce surprisingly complex patterns that do not repeat themselves. We see artifacts such as lines at various angles, aggregations of triangles, and other interesting configurations. The resulting pattern is neither regular nor completely random. It appears to have some order, but is never predictable.
Why is this important or interesting? Keep in mind that we started with the simplest possible starting point: a single black cell. The process involves repetitive application of a very simple rule. From such a repetitive and deterministic process, one would expect repetitive and predictable behavior. There are two surprising results here. One is that the results produce apparent randomness. By every statistical test for randomness that Wolfram could muster, the results are completely unpredictable, and remain (through any number of iterations) effectively random. However, the results are more interesting than pure randomness, which itself would become boring very quickly. There are discernible and interesting features in the designs produced, so the pattern has some order and apparent intelligence. Wolfram shows us many examples of these images, many of which are rather lovely to look at.
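The setup described above is easy to try for yourself. Here is a minimal sketch of an elementary cellular automaton: the rule number's binary expansion assigns an output color to each of the eight possible (left, center, right) neighborhoods, and we iterate starting from a single black cell. The grid width and generation count are arbitrary illustrative choices.

```python
# Minimal sketch of a one-dimensional, two-color cellular automaton.
# Rule 110 is encoded by the binary expansion of 110: the bit at position
# k gives the output for the neighborhood whose (left, center, right)
# colors spell out k in binary.

def step(cells, rule=110):
    """Apply one generation of an elementary CA rule to a list of 0/1 cells."""
    n = len(cells)
    out = []
    for i in range(n):
        left = cells[(i - 1) % n]    # wrap around at the edges
        center = cells[i]
        right = cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right   # 0..7
        out.append((rule >> index) & 1)
    return out

def run(width=31, generations=15, rule=110):
    """Start from a single black cell and print each generation."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(generations):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

run()
```

Printed with `#` for black and `.` for white, the output already shows the irregular triangles Wolfram describes; a degenerate Class 1 rule such as rule 0 collapses to a blank grid after one step.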
Toyota was the latest to step aboard the self-driving-car hype machine this week, announcing it would offer a car with automated driving technologies by the mid-2010s.
Self-driving is a bit of an exaggeration here. In reality, most automakers are introducing smarter cruise control and automatic parking features rather than the complete self-driving experience of Google's cars. On the other hand, a gradual introduction of autonomous driving makes it easier for society and the law to adapt to this kind of technology.
A new type of transistor that is better suited for AI, e.g. for implementing artificial neural network models and statistical machine learning. Compared to traditional silicon transistors, it offers several advantages.
Douglas Hofstadter, the Pulitzer Prize-winning author of Gödel, Escher, Bach, thinks we’ve lost sight of what artificial intelligence really means. His stubborn quest to replicate the human mind.
This article sheds light on the question of whether systems like IBM’s Watson or Apple’s Siri are intelligent or just seem to behave intelligently. It also provides a good historical context for understanding the origins and fundamentals of artificial intelligence.
With the DARPA Robotics Challenge looming large on the horizon, it’s easy to overlook robots that aren’t taking part. One of them was Nino, a humanoid unveiled earlier this year by the National Taiwan University’s Robotics Laboratory. Unlike the DARPA robots, Nino may not find itself performing tasks in dangerous situations any time soon. But this robot has some special skills: It is likely the first full-sized humanoid to demonstrate sign language.
The article provides an interesting discussion of Siri and the future of personal assistant technology.
Siri is an infant in the world of artificial intelligence. Spend any serious amount of time with it and it becomes clear it doesn’t do what a personal assistant needs to do: it doesn’t learn. Siri is unlikely to be quicker than Googling the information yourself. Learning and then predicting behaviour is where these technologies really start to make a difference.
From a computer science perspective, learning the behaviour of a single user is tough. This is the small data problem; unlike big data, where patterns and trends easily emerge, individual human beings can be unpredictable and can change behaviour, which is not helpful for pattern-hunting algorithms.
Qualcomm presented its own chip that simulates artificial neural networks (ANNs) at the hardware level. Although ANNs have been simulated in software for a couple of decades, energy- and cost-efficient hardware implementations are rare. IBM's SyNAPSE project is a similar effort.
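For context on what such a chip bakes into silicon, here is the basic operation it accelerates, sketched in software: an artificial neuron computes a weighted sum of its inputs and passes it through a nonlinearity. The weights and inputs below are arbitrary illustrative values, not anything from Qualcomm's design.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum plus a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Three inputs, three weights, one bias; the output is a value in (0, 1).
out = neuron([1.0, 0.0, 1.0], [0.5, -0.3, 0.8], bias=-1.0)
print(out)
```

A software simulation evaluates millions of these sums sequentially on a CPU; a neuromorphic chip wires them up physically, which is where the energy savings come from.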
Getting these chips into phones and fitness trackers will make personal-assistant features like Siri and Google Now even smarter.
Pursuing goals usually means we tread on other people’s goals. The exception is that as we become more intelligent — and find more alternatives to attain our goals and solve our problems — we can find more ways of doing so without impinging on other people’s goals. A superintelligence would, by definition, be able to find even more ways to solve problems and create solutions that do not hurt others. Therefore, increases in intelligence would increase the probability of friendliness.
My argument is that, for a superintelligent AI, it is more rational to manipulate others to its own advantage than to spend resources hurting them.
With much of our attention focused on the rise of advanced artificial intelligence, few consider the potential for radically amplified human intelligence (IA). It's an open question which will come first, but a technologically boosted brain could be just as powerful, and just as dangerous, as AI.
"telepathic Google" sounds really cool, delivering information and ads directly to your brain.
Tom Simonite reports at MIT Technology Review that a new research group within Facebook is working on an emerging and powerful approach to artificial intelligence known as deep learning, which uses simulated networks of brain cells to process data. Applying this method to data shared on Facebook…
Looks like Facebook is interested in sentiment analysis: determining whether your post is "sad" or "happy".
SRI International is developing a “Digital Aristotle” — a computerized system that uses artificial intelligence to answer novel questions and solve advanced problems in a broad range of scientific disciplines.
Google now wants to transform words that appear on a page into entities that mean something and have related attributes. It’s what the human brain does naturally, but for computers, it’s known as Artificial Intelligence.
Although it sounds less fascinating than some kind of flying robot, this type of research can actually have a great influence on our productivity by improving how we find information. Google is certainly not the first to attempt to extract more knowledge from web pages than keywords, but given the resources Google has, both material and intellectual, it is reasonable to expect practical results soon (1-3 years).
Want to know what people think about a particular issue, a product, or a public figure? Then sentiment analysis is the technology you might be interested in. The link above demonstrates how sentiment analysis is applied to Twitter feeds to mine opinions about Obama.
The task of sentiment analysis is to determine the polarity of a text: negative, positive, or neutral. The basic approach is to look for words that express the corresponding attitude. Examples of positive words are “good”, “satisfied”, “happy”; negative ones are “fail”, “sad”, “broken”, and so on. The trick is to collect a large vocabulary of such words automatically and to take into account the context in which the words are used.
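The word-counting approach just described can be sketched in a few lines. This is a toy lexicon-based classifier: the word lists are tiny illustrative samples, and real systems build far larger vocabularies automatically and also model context (negation, sarcasm, and so on).

```python
# Toy lexicon-based polarity classifier: count positive and negative cue
# words and compare. The lexicons below are small hand-picked samples.

POSITIVE = {"good", "satisfied", "happy", "great", "love"}
NEGATIVE = {"fail", "sad", "broken", "bad", "hate"}

def polarity(text):
    """Return 'positive', 'negative', or 'neutral' by counting cue words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(polarity("I am happy and satisfied"))   # positive
print(polarity("the update is broken and sad"))   # negative
```

Note how brittle this is: simple whitespace splitting misses punctuation-attached words, and "not happy" still counts as positive, which is exactly why context handling is the hard part.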