Bitcoin

German ICO Savedroid Pulls Exit Scam After Raising $50 Million (techcrunch.com) 185

German company Savedroid has pulled a classic exit scam after raising $50 million in ICO and direct funding. The site is currently displaying a South Park meme with the caption "Aannnd it's gone." The founder, Dr. Yassin Hankir, has posted a tweet thanking investors and saying "Over and out." TechCrunch reports: A reverse image search found Hankir's photo on this page for Founder Institute, and he has pitched his product at multiple events, including this one in Germany. Savedroid was originally supposed to use AI to manage user investments and promised a crypto-backed credit card, a claim that CCN notes is popular with scam ICOs. It ran for a number of months and was clearly well-managed, as the group was able to open an office and appear at multiple events.
Facebook

Facebook To Design Its Own Processors For Hardware Devices, AI Software, and Servers (bloomberg.com) 55

Facebook is the latest technology company to design its own semiconductors, reports Bloomberg. "The social media company is seeking to hire a manager to build an 'end-to-end SoC/ASIC, firmware and driver development organization,' according to a job listing on its corporate website, indicating the effort is still in its early stages." From the report: Facebook could use such chips to power hardware devices, artificial intelligence software and servers in its data centers. Next month, the company will launch the Oculus Go, a $200 standalone virtual-reality headset that runs on a Qualcomm processor. Facebook is also working on a slew of smart speakers. Future generations of those devices could be improved by custom chipsets. By using its own processors, the company would have finer control over product development and would be able to better tune its software and hardware together. The postings didn't make it clear what kind of use Facebook wants to put the chips to other than the broad umbrella of artificial intelligence. A job listing references "expertise to build custom solutions targeted at multiple verticals including AI/ML," indicating that the chip work could focus on a processor for artificial intelligence tasks. Facebook AI researcher Yann LeCun tweeted about some of the job postings on Wednesday, asking for candidates interested in designing chips for AI.
Robotics

Europe Divided Over Robot 'Personhood' (politico.eu) 246

Politico Europe has an interesting piece which looks at the high-stakes debate between European lawmakers, legal experts and manufacturers over who should bear the ultimate responsibility for the actions of a machine: the machine itself or the humans who made it? Two excerpts from the piece: The battle goes back to a paragraph of text, buried deep in a European Parliament report from early 2017, which suggests that self-learning robots could be granted "electronic personalities." Such a status could allow robots to be insured individually and be held liable for damages if they go rogue and start hurting people or damaging property.

Those pushing for such a legal change, including some manufacturers and their affiliates, say the proposal is common sense. Legal personhood would not make robots virtual people who can get married and benefit from human rights, they say; it would merely put them on par with corporations, which already have status as "legal persons," and are treated as such by courts around the world.

AI

The Impact of Artificial Intelligence on Innovation (nber.org) 64

Abstract of a paper [PDF] which was originally published last month: Artificial intelligence may greatly increase the efficiency of the existing economy. But it may have an even larger impact by serving as a new general-purpose "method of invention" that can reshape the nature of the innovation process and the organization of R&D. We distinguish between automation-oriented applications such as robotics and the potential for recent developments in "deep learning" to serve as a general-purpose method of invention, finding strong evidence of a "shift" in the importance of application-oriented learning research since 2009.

We suggest that this is likely to lead to a significant substitution away from more routinized labor-intensive research towards research that takes advantage of the interplay between passively generated large datasets and enhanced prediction algorithms. At the same time, the potential commercial rewards from mastering this mode of research are likely to usher in a period of racing, driven by powerful incentives for individual companies to acquire and control critical large datasets and application-specific algorithms. We suggest that policies which encourage transparency and sharing of core datasets across both public and private actors may be critical tools for stimulating research productivity and innovation-oriented competition going forward.

Businesses

Can We Build Indoor 'Vertical Farms' Near The World's Major Cities? (vox.com) 253

Vox reports on the hot new "vertical farming" startup Plenty: The company's goal is to build an indoor farm outside of every city in the world of more than 1 million residents -- around 500 in all. It claims it can build a farm in 30 days and pay investors back in three to five years (versus 20 to 40 for traditional farms). With scale, it says, it can get costs down to be competitive with traditional produce (for a presumably more desirable product that could command a price premium)... It has enormous expansion plans and a bank account full of fresh investor funding, but most excitingly, it is building a 100,000 square foot vertical-farming warehouse in Kent, Washington, just outside of Seattle... It recently got a huge round of funding ($200 million in July, the largest ag-tech investment in history), including some through Jeff Bezos's investment firm, so it has the capital to scale... heck, it even lured away the director of battery technology at Tesla, Kurt Kelty, to be executive of operations and development...

The plants receive no sunlight, just light from hanging LED lamps. There are thousands of infrared cameras and sensors covering everything, taking fine measurements of temperature, moisture, and plant growth; the data is used by agronomists and artificial intelligence nerds to fine-tune the system... There are virtually no pests in a controlled indoor environment, so Plenty doesn't have to use any pesticides or herbicides; it gets by with a few ladybugs... Relative to conventional agriculture, Plenty says that it can get as much as 350 times the produce out of a given acre of land, using 1 percent as much water.

Though it may use less water and power, to be competitive with traditional farms companies like Plenty will also have to be "even better at reducing the need for human planters and harvesters," the article warns.

"In other words, to compete, it's going to have to create as few jobs as possible."
AI

AI Can Generate a 3D Model of a Person After Watching a Few Seconds of Video (sciencemag.org) 43

An anonymous reader shares a report: A new algorithm creates 3D models using standard video footage from one angle. The system has three stages. First, it analyzes a few seconds of video of someone moving -- preferably turning 360 degrees to show all sides -- and for each frame creates a silhouette separating the person from the background. Based on machine learning techniques -- in which computers learn a task from many examples -- it roughly estimates the 3D body shape and location of joints. In the second stage, it "unposes" the virtual human created from each frame, making them all stand with arms out in a T shape, and combines information about the T-posed people into one, more accurate model. Finally, in the third stage, it applies color and texture to the model based on recorded hair, clothing, and skin.
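The three-stage pipeline above can be sketched in toy form. Everything here is a stand-in, not the paper's method: the "silhouette" is simple thresholding rather than learned segmentation, the per-frame "shape estimate" is just a bounding box, and "unposing" reduces to averaging.

```python
import numpy as np

def silhouette(frame, bg_threshold=0.1):
    # Stage 1 (sketch): separate the person from the background.
    # Faked with intensity thresholding; the real system uses learned segmentation.
    return (frame > bg_threshold).astype(np.uint8)

def estimate_shape(sil):
    # Stage 1 continued: stand-in for the learned per-frame estimate of
    # body shape -- here just the silhouette's bounding box and area.
    ys, xs = np.nonzero(sil)
    return {"height": int(ys.max() - ys.min() + 1),
            "width": int(xs.max() - xs.min() + 1),
            "area": int(sil.sum())}

def fuse_unposed(estimates):
    # Stage 2 (sketch): "unpose" each per-frame estimate into a canonical
    # T-pose, then fuse them into one, more accurate model. With these toy
    # features, unposing is a no-op and fusion is a mean.
    return {k: float(np.mean([e[k] for e in estimates]))
            for k in estimates[0]}

# Toy "video": three frames of a bright blob shifting slightly on a dark background.
frames = [np.zeros((32, 32)) for _ in range(3)]
for i, f in enumerate(frames):
    f[8:24, 10 + i:20 + i] = 1.0  # the "person"

model = fuse_unposed([estimate_shape(silhouette(f)) for f in frames])
print(model)  # {'height': 16.0, 'width': 10.0, 'area': 160.0}
```

Stage 3 (texturing) is omitted; it maps recorded pixel colors onto the fused mesh.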
AI

Google Works Out a Fascinating, Slightly Scary Way For AI To Isolate Voices In a Crowd (arstechnica.com) 45

An anonymous reader quotes a report from Ars Technica: Google researchers have developed a deep-learning system designed to help computers better identify and isolate individual voices within a noisy environment. As noted in a post on the company's Google Research Blog this week, a team within the tech giant attempted to replicate the cocktail party effect, or the human brain's ability to focus on one source of audio while filtering out others -- just as you would while talking to a friend at a party. Google's method uses an audio-visual model, so it is primarily focused on isolating voices in videos. The company posted a number of YouTube videos showing the tech in action.

The company says this tech works on videos with a single audio track and can isolate voices in a video algorithmically, depending on who's talking, or by having a user manually select the face of the person whose voice they want to hear. Google says the visual component here is key, as the tech watches for when a person's mouth is moving to better identify which voices to focus on at a given point and to create more accurate individual speech tracks for the length of a video. According to the blog post, the researchers developed this model by gathering 100,000 videos of "lectures and talks" on YouTube, extracting nearly 2,000 hours worth of segments from those videos featuring unobstructed speech, then mixing that audio to create a "synthetic cocktail party" with artificial background noise added. Google then trained the tech to split that mixed audio by reading the "face thumbnails" of people speaking in each video frame and a spectrogram of that video's soundtrack. The system is able to sort out which audio source belongs to which face at a given time and create separate speech tracks for each speaker. Whew.
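The "synthetic cocktail party" training setup and mask-based separation described above can be illustrated on toy signals. This is only a sketch of the mechanics: the two "speakers" are pure tones, and the mask is computed from the known sources (an oracle), whereas Google's model predicts per-speaker masks from face thumbnails and the spectrogram.

```python
import numpy as np

rng = np.random.default_rng(0)
sr = 8000
t = np.arange(sr) / sr

# Two "speakers" (stand-in tones) plus background noise, mixed into a
# single track -- the "synthetic cocktail party."
speaker_a = np.sin(2 * np.pi * 220 * t)
speaker_b = np.sin(2 * np.pi * 1760 * t)
mixture = speaker_a + speaker_b + 0.05 * rng.standard_normal(sr)

def separate(mix, source_hint):
    # Frequency-domain masking: weight each frequency bin by how much of
    # its energy belongs to the target source, then invert the transform.
    MIX, HINT = np.fft.rfft(mix), np.fft.rfft(source_hint)
    mask = np.clip(np.abs(HINT) / (np.abs(MIX) + 1e-8), 0, 1)  # ratio mask
    return np.fft.irfft(MIX * mask, n=len(mix))

recovered_a = separate(mixture, speaker_a)

# The recovered track should correlate strongly with speaker A and only
# weakly with speaker B.
corr_a = np.corrcoef(recovered_a, speaker_a)[0, 1]
corr_b = np.corrcoef(recovered_a, speaker_b)[0, 1]
print(corr_a, corr_b)
```

The trained system replaces the oracle mask with one predicted per speaker, per time-frequency bin, which is what lets it split real mixed speech.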

AI

FDA Approves AI-Powered Software To Detect Diabetic Retinopathy (engadget.com) 34

The U.S. Food and Drug Administration (FDA) has just approved an AI-powered device that can be used by non-specialists to detect diabetic retinopathy in adults with diabetes. Engadget reports: Diabetic retinopathy occurs when the high levels of blood sugar in the bloodstream cause damage to your retina's blood vessels. It's the most common cause of vision loss, according to the FDA. The approval comes for a device called IDx-DR, a software program that uses an AI algorithm to analyze images of the eye that can be taken in a regular doctor's office with a special camera, the Topcon NW400. The photos are then uploaded to a server that runs IDx-DR, which can then tell the doctor if there is a more than mild level of diabetic retinopathy present. If not, it will advise a re-screen in 12 months. The device and software can be used by health care providers who don't normally provide eye care services. The FDA warns that you shouldn't be screened with the device if you have had laser treatment, eye surgery or injections, or if you have other conditions, like persistent vision loss, blurred vision, floaters, previously diagnosed macular edema and more.
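The screening workflow described above reduces to a simple triage rule. A minimal sketch, where the severity score and 0.5 threshold are hypothetical stand-ins (the actual IDx-DR algorithm and its operating point are not described in the report):

```python
def idx_dr_triage(severity_score, threshold=0.5):
    """Return the screening recommendation for one retinal image.

    severity_score: hypothetical output of the detection algorithm,
    higher meaning more signs of diabetic retinopathy.
    """
    if severity_score > threshold:
        return "refer to an eye-care specialist (more than mild retinopathy)"
    return "re-screen in 12 months"

print(idx_dr_triage(0.8))  # refer to an eye-care specialist (more than mild retinopathy)
print(idx_dr_triage(0.2))  # re-screen in 12 months
```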
AI

How Will Automation Affect Different US Cities? (northwestern.edu) 98

Casino dealers and fishermen are both likely to be replaced by machines in coming years. So which city will lose more of its human workforce -- Las Vegas, the country's gambling capital, or Boston, a major fishing hub? From the research: People tend to assume that automation will affect every locale in the same, homogeneous way, says Hyejin Youn, an assistant professor of management and organization at Kellogg. "They have never thought of how this is unequally distributed across cities, across regions in the U.S." It is a high-stakes question. The knowledge that certain places will lose more jobs could allow workers and industries to better prepare for the change and could help city leaders ensure their local economies are poised to rebound. In new research, Youn and colleagues seek to understand how machines will disrupt the economies of individual cities. By carefully analyzing the workforces of American metropolitan areas, the team calculated what portion of jobs in each area is likely to be automated in coming decades. You can look up your city's name, as well as the job position you're curious about, here.
AI

The US Military Desperately Wants To Weaponize AI (technologyreview.com) 179

Artificial intelligence is a transformative technology, and US generals already see it as the next big weapon in their arsenal. From a report: War-machine learning: Michael Griffin, Undersecretary of Defense for Research and Engineering, signaled how keen the military is to make use of AI at the Future of War 2018 conference held in Washington, DC, yesterday. Saber rattling: "There might be an artificial intelligence arms race, but we're not yet in it," Griffin said. In reference to China and Russia, he added, "I think our adversaries -- and they are our adversaries -- understand very well the possible future utility of machine learning, and I think it's time we did as well."
AI

Zuckerberg Testimony: Facebook AI Will Curb Hate Speech In 5 To 10 Years (inverse.com) 469

An anonymous reader quotes a report from Inverse: After a question from Senator John Thune (R-SD) about why the public should believe that Facebook was earnestly working towards improving privacy, Zuckerberg essentially responded by saying that things are different now. Zuckerberg said that the platform is going through a "broad philosophical shift in how we approach our responsibility as a company." "We need to now take a more proactive view at policing the ecosystem," he said. In part, Zuckerberg was talking about hate speech and the various ways his platform has been used to seed misinformation. This prompted Thune to ask what steps Facebook was taking to improve its ability to define what is and what is not hate speech.

"Hate speech is one of the hardest," Zuckerberg said. "Determining if something is hate speech is very linguistically nuanced. You need to understand what is a slur and whether something is hateful, and not just in English..." Zuckerberg said that the company is increasingly developing AI tools to flag hate speech proactively, rather than relying on reactions from users and employees to flag offensive content. But according to the CEO, because flagging hate speech is so complex, he estimates it could take five to 10 years to create adequate A.I. "Today we're just not there on that," he said. For now, Zuckerberg said, it's still on users to flag offensive content. "We have people look at it, we have policies to try and make it as not subjective as possible, but until we get it more automated there is a higher error rate than I'm happy with," he said.

AI

Elon Musk Is Paying For Free Streaming of a New Documentary about AI Dangers (syfy.com) 185

An anonymous reader quotes Syfy.com: There's a new documentary warning about the perils of artificial intelligence out there, and Elon Musk wants you to see it. So much so that he's making it available to stream for free this weekend. The documentary -- Do You Trust This Computer? -- explores the rise of machine intelligence and its possible consequences... Check out the trailer, and then proceed to be creeped way the hell out.... "It's a subject that I feel we should be paying close attention to," said Musk in a news release. "I think it's important that a lot of people see this movie, so I'm paying for it to be seen to the world for free this weekend."
Musk attended the premiere of the film with the creator of HBO's Westworld, and tweeted Saturday that the video had 5 million views in just 36 hours.

Musk himself is interviewed in the film, warning of the dire possibility of "an immortal dictator from which we can never escape."
Transportation

California Police Ticket A Self-Driving Car (cbslocal.com) 344

Long-time Slashdot reader Ichijo writes: A self-driving car was slapped with a ticket after police said it got too close to a pedestrian on a San Francisco street.

The self-driving car owned by San Francisco-based Cruise was pulled over for not yielding to a pedestrian in a crosswalk. Cruise says its data shows the person was far enough away from the vehicle and that the car did nothing wrong.... According to data collected by Cruise, the pedestrian was 10.8 feet away from the car when, while in self-driving mode, it began to continue down Harrison at 14th St.

The person in the crosswalk was not injured.

Security

Best Buy Warns of Data Breach (usatoday.com) 25

Best Buy, along with Delta Air Lines and Sears, says that [24]7.ai, a company that provides the technology backing its chat services, was hacked between September 27 and October 12, potentially jeopardizing the personal payment details of "a number of Best Buy customers." The electronics company said in a statement that "as best we can tell, only a small fraction of our overall online customer population could have been caught up in this... incident whether or not they used the chat function." They will reach out to customers who were impacted.
AI

Researchers Develop Device That Can 'Hear' Your Internal Voice (theguardian.com) 108

Researchers have created a wearable device that can read people's minds when they use an internal voice, allowing them to control devices and ask queries without speaking. From a report: The device, called AlterEgo, can transcribe words that wearers verbalise internally but do not say out loud, using electrodes attached to the skin. "Our idea was: could we have a computing platform that's more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?" said Arnav Kapur, who led the development of the system at MIT's Media Lab.

Kapur describes the headset as an "intelligence-augmentation," or IA, device. It was presented at the Association for Computing Machinery's Intelligent User Interface conference in Tokyo. It is worn around the jaw and chin, clipped over the top of the ear to hold it in place. Four electrodes under the white plastic device make contact with the skin and pick up the subtle neuromuscular signals that are triggered when a person verbalises internally. When someone says words inside their head, artificial intelligence within the device can match particular signals to particular words, feeding them into a computer.
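The "match particular signals to particular words" step can be sketched as a classifier over electrode feature vectors. Everything below is a hypothetical stand-in: the four-dimensional features, the synthetic signals, and the nearest-centroid rule (the real system reportedly uses a neural network).

```python
import numpy as np

rng = np.random.default_rng(1)

# Each "utterance" is a feature vector from the four jaw/chin electrodes.
WORDS = ["yes", "no", "call", "time"]
true_signatures = {w: rng.normal(loc=i, scale=0.1, size=4)
                   for i, w in enumerate(WORDS)}

def train(samples):
    # samples: {word: [feature_vector, ...]} -> per-word mean signature
    return {w: np.mean(vs, axis=0) for w, vs in samples.items()}

def decode(signal, model):
    # Match an incoming electrode signal to the closest learned word.
    return min(model, key=lambda w: np.linalg.norm(signal - model[w]))

# Synthetic training data: noisy repetitions of each word's signature.
samples = {w: [sig + rng.normal(scale=0.05, size=4) for _ in range(20)]
           for w, sig in true_signatures.items()}
model = train(samples)

# A new internally verbalised "call" should decode correctly.
probe = true_signatures["call"] + rng.normal(scale=0.05, size=4)
print(decode(probe, model))  # call
```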

Electronic Frontier Foundation

EFF: Google Should Not Help the US Military Build Unaccountable AI Systems (eff.org) 110

The Electronic Frontier Foundation's Peter Eckersley writes: Yesterday, The New York Times reported that there is widespread unrest amongst Google's employees about the company's work on a U.S. military project called "Project Maven." Google has claimed that its work on Maven is for "non-offensive uses only," but it seems that the company is building computer vision systems to flag objects and people seen by military drones for human review. This may in some cases lead to subsequent targeting by missile strikes. EFF has been mulling the ethical implications of such contracts, and we have some advice for Google and other tech companies that are considering building military AI systems.
The EFF lists several "starting point" questions that any company, or any worker, considering whether to work with the military on a project with potentially dangerous or risky AI applications should be asking:

1. Is it possible to create strong and binding international institutions or agreements that define acceptable military uses and limitations in the use of AI? While this is not an easy task, the current lack of such structures is troubling. There are serious and potentially destabilizing impacts from deploying AI in any military setting not clearly governed by settled rules of war. The use of AI in potential target identification processes is one clear category of uses that must be governed by law.
2. Is there a robust process for studying and mitigating the safety and geopolitical stability problems that could result from the deployment of military AI? Does this process apply before work commences, along the development pathway and after deployment? Could it incorporate the sufficient expertise to address subtle and complex technical problems? And would those leading the process have sufficient independence and authority to ensure that it can check companies' and military agencies' decisions?
3. Are the contracting agencies willing to commit to not using AI for autonomous offensive weapons? Or to ensuring that any defensive autonomous systems are carefully engineered to avoid risks of accidental harm or conflict escalation? Are present testing and formal verification methods adequate for that task?
4. Can there be transparent, accountable oversight from an independently constituted ethics board or similar entity with both the power to veto aspects of the program and the power to bring public transparency to issues where necessary or appropriate? For example, while Alphabet's AI-focused subsidiary DeepMind has committed to independent ethics review, we are not aware of similar commitments from Google itself. Given this letter, we are concerned that the internal transparency, review, and discussion of Project Maven inside Google was inadequate. Any project review process must be transparent, informed, and independent. While it remains difficult to ensure that that is the case, without such independent oversight, a project runs real risk of harm.
Google

Google Turns To Users To Improve Its AI Chops Outside the US (wired.com) 24

Google is betting that algorithms that understand images and text will draw business to its cloud services, make augmented reality popular, and prompt us to search using our smartphone cameras. From a report: The search company's machine learning systems work best on material from a few rich parts of the world, like the US. They stumble more frequently on data from less affluent countries -- particularly emerging economies like India that Google is counting on to maintain its growth. "We have a very sparse training data set from parts of the world that are not the United States and Western Europe," says Anurag Batra, a researcher at Google.

When Batra travels to his native Delhi, he says Google's AI systems become less smart. Now, he leads a project trying to change that. "We can understand pasta very well, but if you ask about pesarattu dosa, or anything from Korea or Vietnam, we're not very good," Batra says. To fix the problem, Batra is tapping the brains and phones of some of Google's billions of users. His team built an app called Crowdsource that asks people to perform quick tasks like checking the accuracy of Google's image-recognition and translation algorithms. Starting this week, the Crowdsource app also asks users to take and upload photos of nearby objects.

AI

Computer Searches Telescope Data For Evidence of Distant Planets (mit.edu) 12

As part of an effort to identify distant planets hospitable to life, NASA has established a crowdsourcing project in which volunteers search telescopic images for evidence of debris disks around stars, which are good indicators of exoplanets. From a report: Using the results of that project, researchers at MIT have now trained a machine-learning system to search for debris disks itself. The scale of the search demands automation: There are nearly 750 million possible light sources in the data accumulated through NASA's Wide-Field Infrared Survey Explorer (WISE) mission alone. In tests, the machine-learning system agreed with human identifications of debris disks 97 percent of the time. The researchers also trained their system to rate debris disks according to their likelihood of containing detectable exoplanets. In a paper describing the new work in the journal Astronomy and Computing, the MIT researchers report that their system identified 367 previously unexamined celestial objects as particularly promising candidates for further study.
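The approach described above -- train a classifier on volunteer-labeled light sources, then rank unexamined ones by their likelihood of hosting a debris disk -- can be sketched with a toy logistic-regression model. The features and labels here are synthetic stand-ins; the actual system learned from NASA's crowdsourced disk identifications over WISE data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for volunteer-labeled light sources:
# a few features per source (e.g. infrared-excess measurements).
n, d = 500, 3
X = rng.standard_normal((n, d))
true_w = np.array([2.0, -1.0, 0.5])
y = (X @ true_w + 0.3 * rng.standard_normal(n) > 0).astype(float)

# Logistic regression fit by gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))          # predicted disk probability
    w -= 0.1 * X.T @ (p - y) / n          # gradient step

# Agreement with the labels, analogous to the reported 97% agreement
# with human identifications (on held-in data here, so optimistic).
pred = (X @ w > 0).astype(float)
accuracy = (pred == y).mean()

# Rank unexamined "celestial objects" by predicted disk likelihood,
# surfacing the most promising candidates for further study.
candidates = rng.standard_normal((10, d))
likelihood = 1 / (1 + np.exp(-candidates @ w))
top = np.argsort(likelihood)[::-1][:3]
print(accuracy, top)
```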
Microsoft

Microsoft: We'll Help Customers Create Patents But We Get a License To Use Them (zdnet.com) 52

Microsoft outlined a new intellectual-property policy on Thursday for co-developed technology that embraces open source and seeks to assure customers it won't run off with their innovations. From a report: The shared innovation principles build on its Azure IP Advantage program for helping customers combat patent trolls. The new principles for co-developed innovation cover ownership of existing technology, customer ownership of new patents, support for open source, licensing new IP back to Microsoft, software portability, transparency, and learning. Microsoft president Brad Smith says the principles aim to assuage customers' fears that Microsoft may end up using co-developed technology to rival them.

[...] In return, Microsoft gets to license back any of the patents in the new technology but promises to limit their use to improving its own platform technologies, such as Azure, Azure AI services, Office 365, Windows, Xbox, and HoloLens. It also reserves the right to use "code and tools developed by or on behalf of Microsoft that are intended to provide technical assistance to customers in their respective businesses."

AI

AI Experts Boycott South Korean University Over 'Killer Robots' (bbc.com) 73

An anonymous reader shares a report: Leading AI experts have boycotted a South Korean university over a partnership with weapons manufacturer Hanwha Systems. More than 50 AI researchers from 30 countries signed a letter expressing concern about its plans to develop artificial intelligence for weapons. In response, the university said it would not be developing "autonomous lethal weapons." The boycott comes ahead of a UN meeting to discuss killer robots. Shin Sung-chul, president of the Korea Advanced Institute of Science and Technology (Kaist), said: "I reaffirm once again that Kaist will not conduct any research activities counter to human dignity including autonomous weapons lacking meaningful human control. Kaist is significantly aware of ethical concerns in the application of all technologies including artificial intelligence."
