If they announce a testing… They are long down the road of practical usage!
We don’t have X-wing fighters just yet, but we may soon have their laser weapons. DARPA is working on a system that’s downright Lucasian.
A prototype laser turret attached to a test aircraft. There were no reports of actual pew pew!Air Force Research Laboratory
Prepare yourself for a future filled with real-life pew pew! The Defense Advanced Research Projects Agency is working with Lockheed Martin to test “a new beam control turret… to give 360-degree coverage for high-energy laser weapons operating on military aircraft.”
In other words, it stuck a primitive (by rebel standards) “Star Wars”-style laser cannon on a fighter jet and flew it over Michigan eight times.
“These initial flight tests validate the performance of our ABC turret design,” Lockheed’s Doug Graham said. in a release.
The test flights demonstrated the airworthiness of the turret, but it doesn’t appear that anyone or anything in the Great Lakes region was actually zapped as part of testing.
Still, this represents a significant move toward the inevitable merging of the “Star Wars” universe with our own so-called “reality.” We’ve already seen the Navy’s laser weapon that’s set to deploy, and science has discovered how to create a real-life lightsaber, so perhaps it would be wise to start scanning the galaxies not just for potentiallyhabitable exoplanets, but for planet-size super weapons as well.
And how will SGR-A1 know if you are friendly or not? It will read your microchip. (Expect this announcement after several “friendly fire incidents)
A Samsung Group subsidiary has worked on a robot sentry that they call the SGR-A1, and this particular robot will carry a fair amount of weapons that ought to make you think twice about crossing the borders of South Korea illegally – as it has been tested out at the demilitarized zone along the border over with its neighbor, North Korea. The SGR-A1 will be able to detect intruders with the help of machine vision (read: cameras), alongside a combination of heat and motion sensors.
The whole idea of the Samsung SGR-A1 is to let this military robot sentry do the work of its human counterparts over at the demilitarized zone at the South and North Korea border, so that there will be a minimal loss of life on the South Korean side just in case things turn sour between the two neighbors.
First announced in 2006 (where obvious improvements have been made since, and I am not surprised if much of it remained as classified information), this $200,000, all weather, 5.56 mm robotic machine gun also sports an optional grenade launcher. It will make use of its IR and visible light cameras to track multiple targets and remains under the control of a human operator from a remote location. Basically, it claims to be able to “identify and shoot a target automatically from over two miles (3.2 km) away.” Scary! When used on the DMZ, this robot will not distinguish between friend or foe – anyone who crosses the line is deemed as an enemy.
The U.S. government threatened to fine Yahoo $250,000 a day in 2008 if it failed to comply with a broad demand for user data that the company believed was unconstitutional, according to court documents unsealed Thursday. (Justin Sullivan/Getty Images)
The U.S. government threatened to fine Yahoo $250,000 a day in 2008 if it failed to comply with a broad demand to hand over user communications — a request the company believed was unconstitutional — according to court documents unsealed Thursday that illuminate how federal officials forced American tech companies to participate in the National Security Agency’s controversial PRISM program.
The documents, roughly 1,500 pages worth, outline a secret and ultimately unsuccessful legal battle by Yahoo to resist the government’s demands. The company’s loss required Yahoo to become one of the first to begin providing information to PRISM, a program that gave the NSA extensive access to records of online communications by users of Yahoo and other U.S.-based technology firms.
The ruling by the Foreign Intelligence Surveillance Court of Review became a key moment in the development of PRISM, helping government officials to convince other Silicon Valley companies that unprecedented data demands had been tested in the courts and found constitutionally sound. Eventually, most major U.S. tech companies, including Google, Facebook, Apple and AOL, complied. Microsoft had joined earlier, before the ruling, NSA documents have shown.
A version of the court ruling had been released in 2009 but was so heavily redacted that observers were unable to discern which company was involved, what the stakes were and how the court had wrestled with many of the issues involved.
“We already knew that this was a very, very important decision by the FISA Court of Review, but we could only guess at why,” said Stephen Vladeck, a law professor at American University.
PRISM was first revealed by former NSA contractor Edward Snowden last year, prompting intense backlash and a wrenching national debate over allegations of overreach in government surveillance.
Documents made it clear that the program allowed the NSA to order U.S.-based tech companies to turn over e-mails and other communications to or from foreign targets without search warrants for each of those targets. Other NSA programs gave even more wide-ranging access to personal information of people worldwide, by collecting data directly from fiber-optic connections.
In the aftermath of the revelations, the companies have struggled to defend themselves against accusations that they were willing participants in government surveillance programs — an allegation that has been particularly damaging to the reputations of these companies overseas, including in lucrative markets in Europe.
Yahoo, which endured heavy criticism after The Washington Post and Britain’s Guardian newspaper used Snowden’s documents to reveal the existence of PRISM last year, was legally bound from revealing its efforts in attempting to resist government pressure. The New York Times first reported Yahoo’s role in the case in June 2013, a week after the initial PRISM revelations.
Both the Foreign Intelligence Surveillance Court and the Foreign Intelligence Surveillance Court of Review, an appellate court, ordered declassification of the case last year, amid a broad effort to make public the legal reasoning behind NSA programs that had stirred national and international anger. Judge William C. Bryson, presiding judge of the Foreign Intelligence Surveillance Court of Review, ordered the documents from the legal battle unsealed Thursday. Documents from the case in the lower court have not been released.
Yahoo hailed the decision in a Tumblr post Thursday afternoon. “The released documents underscore how we had to fight every step of the way to challenge the U.S. Government’s surveillance efforts,” Ron Bell, the company’s general counsel, wrote in the post.
The Justice Department and the Office of the Director of National Intelligence published their own Tumblr post Thursday evening offering a detailed description of the court proceedings and posting several related documents. It noted that both the Foreign Intelligence Surveillance Court and the appeals court sided with the government on the main questions at issue, and added that a subsequent law added more protections, making it “even more protective of the Fourth Amendment rights of U.S. persons than the statute upheld by the [appeals court] as constitutional.”
At issue in the original court case was a recently passed law, the Protect America Act of 2007, that allowed the government to collect data for significant foreign intelligence purposes on targets “reasonably believed” to be outside of the United States. Individual search warrants were not required for each target. That law has lapsed but became the foundation for the FISA Amendments Act of 2008, which created the legal authority for some of the NSA programs later revealed by Snowden.
The order requiring data from Yahoo came in 2007, soon after the Protect America Act passed. It set off alarms at the company because it sidestepped the traditional requirement that each target be subject to court review before surveillance could begin. The order also went beyond “metadata” — records of communications but not their actual content — to include the full e-mails.
A government filing from February 2008 described the order to Yahoo as including “certain types of communications while those communications are in transmission.” It also made clear that while this was intended to target people outside the United States, there inevitably would be “incidental collection” of the communications of Americans. The government promised “stringent minimization procedures to protect the privacy interests of United States persons.”
Rather than immediately comply with the sweeping order, Yahoo sued.
Central to the case was whether the Protect America Act overstepped constitutional bounds, particularly the Fourth Amendment prohibition on unreasonable searches and seizures without a warrant. An early Yahoo filing said the case was “of tremendous national importance. The issues at stake in this litigation are the most serious issues that this Nation faces today — to what extent must the privacy rights guaranteed by the United States Constitution yield to protect our national security.”
The appeals court, however, ruled that the government had put in place adequate safeguards to avoid constitutional violations.
“We caution that our decision does not constitute an endorsement of broad-based, indiscriminate executive power,” the court wrote on Aug. 22, 2008. “Rather, our decision recognizes that where the government has instituted several layers of serviceable safeguards to protect individuals against unwarranted harms and to minimize incidental intrusions, its efforts to protect national security should not be frustrated by the courts. This is such a case.”
The government threatened Yahoo with the $250,000-a-day fine after the company had lost an initial round before the Foreign Intelligence Surveillance Court but was still pursuing an appeal. Faced with the fine, Yahoo began complying with the legal order as it continued with the appeal, which it lost several months later.
Stewart Baker, a former NSA general counsel and Bush administration Department of Homeland Security official, said it’s not unusual for courts to order compliance with rulings while appeals continue before higher courts.
“I’m always astonished how people are willing to abstract these decisions from the actual stakes,” Baker said. “We’re talking about trying to gather information about people who are trying to kill us and who will succeed if we don’t have robust information about their activities.”
The American Civil Liberties Union applauded Thursday’s move to release the documents but said it was long overdue.
“The public can’t understand what a law means if it doesn’t know how the courts are interpreting that law,” said Patrick Toomey, a staff attorney with the ACLU’s National Security Project.
Several high-profile websites — including Kickstarter, Etsy, Reddit, Mozilla, and Meetup — will display spinning-wheel icons on Wednesday in an attempt to show visitors the Internet slow lanes they say will appear if the U.S. Federal Communications Commission doesn’t pass strong Net neutrality regulations.
The symbolic Internet slowdown will include the dreaded site-loading spinning icon to symbolize what Net neutrality advocates believe the Web could look like without strong rules. Participating sites, which won’t really slow down their load times, will encourage visitors to call or email U.S. policymakers in support of Net neutrality rules.
(CNN) — Hundreds of children across the United States have been hospitalized with a serious respiratory illness. Scientists say they believe the bug to blame is Enterovirus D68, also known as EV-D68.
Enteroviruses are common, especially in September, but this particular type is not. There have been fewer than 100 cases recorded since it was identified in the 1960s, according to the Centers for Disease Control and Prevention.
WASHINGTON (CBS) – Teflon tape, molded plastic explosives and handguns are all concealment tricks that a group of researchers were able to pull off on the Rapiscan Secure 1000 machines previously used at TSA checkpoints and currently used at courthouses, prisons and other government security stops.
Researchers from the University of California, San Diego, the University of Michigan and Johns Hopkins University maneuvered weapons past the full-body X-ray scanners that were deployed at U.S. airports between 2009 and 2013 – at a cost of more than $1 billion.
“Frankly, we were shocked by what we found,” said J. Alex Halderman, a professor of computer science at the University of Michigan, in a statement. “A clever attacker can smuggle contraband past the machines using surprisingly low-tech techniques.”
George W. Bush Takes ‘Ice Bucket Challenge’
Rapiscan Systems labels the Secure 1000 machines as “the most effective and most widely deployed image-based people screening solution,” although the scanners were removed from TSA airport checkpoints last year because of privacy complaints stemming from the near-naked images it produced of passengers.
But the study authors say that the machines have been transferred to government buildings, jails and courthouses across the country.
The researchers were able to conceal a .380 ACP pistol and plastic explosives from the full-body X-ray scanners in addition to installing malware to produce fake “all-clear” images. They were also able to pull off a series of weapon concealment tricks, including the use of Teflon tape to conceal weapons against a person’s spine. In one test, a 200 gram pancake of plastic explosive-like material was molded to a passenger’s torso to avoid detection.
Another scanner image failed to reveal a pistol hidden behind a person’s knee and a pistol that was sewn into a pant leg. A knife and the C-4 explosive simulator material were also invisible to the scanners.
Pet-Friendly Dating Sites Match Up People, Pooches
The scanning operator sees no difference between test images with and without the weapons and explosive material.
Another troubling element of the machine’s vulnerability is the ease in which the researchers were able to even test it in the first place. They purchased the government surplus scanner from eBay.
In a statement, UC San Diego computer scientist Hovav Shacham said, “The (scanner’s) designers seem to have assumed that attackers would not have access to a Secure 1000 to test and refine their attacks.”
“These machines were tested in secret, presumably without this kind of adversarial mindset, thinking about how an attacker would adapt to the techniques being used,” Halderman told Wired, prior to a research presentation at the Usenix Security Conference on Thursday. “They might stop a naive attacker. But someone who applied just a bit of cleverness to the problem would be able to bypass them. And if they had access to a machine to test their attacks, they could render their ability to detect contraband virtually useless.”
In 2012, TSA cautioned reporters from citing a video produced by blogger Jonathan Corbett that showed TSA’s Rapiscan full-body scanners being duped by a series of simple weapon concealment tricks.
Not that your phones and Computers cant be killed already (hello) – this just codifies it. (makes it “legal”) – This will likely be used to shut down protests, and dissent.
A California bill that would require cellphone makers to install a “kill switch” to render stolen devices inoperable has passed the state legislature, and now moves to the governor’s office for consideration.
The bill won Senate approval Monday by a vote of 27-8. If Gov. Jerry Brown signs the bill, it would be among the first such laws in the nation (Minnesota has adopted a similar anti-theft requirement).
Read MoreGoogle, Microsoft adding ‘kill switch’
An earlier version of the “kill switch” bill died in the Senate this spring, amid criticism that its language was so broad it would have imposed the requirement on a number of devices beyond smartphones.
TommL | Vetta | Getty Images
Several device manufacturers and wireless carriers withdrew their opposition once the bill was amended to exclude tablets and exempt smartphone models introduced before Jan. 1, 2015, that could not “reasonably be re-engineered” to incorporate the anti-theft technology.
If the bill is signed into law, manufacturers will have until July 1, 2015, to incorporate the theft deterrent, which users would be asked to turn on when they set up their new devices.
Read MorePush is on to get ‘kill switch’ into smartphones
State Sen. Mark Leno, D-San Francisco, introduced the bill to address the epidemic of smartphone thefts, which the Federal Communications Commission estimates to account for 30 percent to 40 percent of thefts in major cities.
In San Francisco, more than half of all robberies in 2012 involved the theft of a mobile device, according to the city district attorney’s office.
“Our goal is to swiftly take the wind out of the sails of thieves who have made the theft of smartphones one of the most prevalent street crimes in California’s big cities,” Leno said in a statement.
Amid heightened concerns about smartphone theft, several key players in the industry took steps to address the problem ahead of legislation.
The five largest U.S. cellular carriers and key device manufacturers — including Apple, Google and Samsung – pledged to incorporate an anti-theft feature that would remotely wipe data from a smartphone and render it inoperable. Lost or stolen devices could later be restored if recovered.
More from Re/code:
Founders Fund Bets Software Can Lower Self-Insurance Costs
Telstra Buys Video Platform Company Ooyala for $270 Million
Twitter Paid $134 Million for Gnip
Ruben Santamarta of IOActive to provide details at Black Hat conference
*From Last Year’s Conf: Hacker (Barnaby Jack) dies days before he was to reveal how to remotely kill pacemaker patients
Security researcher Barnaby Jack has passed away in San Francisco, only days before a scheduled appearance at a Las Vegas hacker conference where he intended to show how an ordinary pacemaker could be compromised in order to kill a man.
Jack, who previously presented hacks involving ATMs and insulin pumps at the annual Black Hat conference in Vegas, was confirmed dead Friday morning by the San Francisco Medical Examiner’s office, Reuters reported. He passed away Thursday this week, but the office declined to offer any more details at this time.
Jack’s death came one week to the day before he was scheduled to detail one of his most recent exploits in a Black Hat talk called “Implantable Medical Devices: Hacking Humans.”
“I was intrigued by the fact that these critical life devices communicate wirelessly. I decided to look at pacemakers and ICDs (implantable cardioverter defibrillators) to see if they communicated securely and if it would be possible for an attacker to remotely control these devices,” Jack told Vice last month.
In theory, a hacker could use a plane’s onboard WiFi signal or inflight entertainment system to hack into its avionics equipment, potentially disrupting or modifying satellite communications. (Associated Press)
Cybersecurity researcher Ruben Santamarta says he has figured out how to hack the satellite communications equipment on passenger jets through their WiFi and inflight entertainment systems – a claim that, if confirmed, could prompt a review of aircraft security.
‘These devices are wide open.’- Ruben Santamarta, IOActive
Santamarta, a consultant with cybersecurity firm IOActive, is scheduled to lay out the technical details of his research at this week’s Black Hat hacking conference in Las Vegas, an annual convention where thousands of hackers and security experts meet to discuss emerging cyber threats and improve security measures.
His presentation on Thursday on vulnerabilities in satellite communications systems used in aerospace and other industries is expected to be one of the most widely watched at the conference.
“These devices are wide open. The goal of this talk is to help change that situation,” Santamarta, 32, told Reuters.
The researcher said he discovered the vulnerabilities by “reverse engineering” – or decoding – highly specialized software known as firmware, used to operate communications equipment made by Cobham Plc, Harris Corp, EchoStar Corp’s Hughes Network Systems, Iridium Communications Inc and Japan Radio Co Ltd.
In theory, a hacker could use a plane’s onboard WiFi signal or inflight entertainment system to hack into its avionics equipment, potentially disrupting or modifying satellite communications, which could interfere with the aircraft’s navigation and safety systems, Santamarta said.
Hacks tested in controlled environments
He acknowledged that his hacks have only been tested in controlled environments, such as IOActive’s Madrid laboratory, and they might be difficult to replicate in the real world. Santamarta said he decided to go public to encourage manufacturers to fix what he saw as risky security flaws.
Representatives for Cobham, Harris, Hughes and Iridium said they had reviewed Santamarta’s research and confirmed some of his findings, but downplayed the risks.
For instance, Cobham, whose Aviation 700 aircraft satellite communications equipment was the focus of Santamarta’s research, said it is not possible for hackers to use WiFi signals to interfere with critical systems that rely on satellite communications for navigation and safety. The hackers must have physical access to Cobham’s equipment, according to Cobham spokesman Greg Caires.
“In the aviation and maritime markets we serve, there are strict requirements restricting such access to authorized personnel only,” said Caires.
A Japan Radio Co spokesman declined to comment, saying information on such vulnerabilities was not public.
Black Hat, which was founded in 1997, has often been a venue for hackers to present breakthrough research. In 2009, Charlie Miller and Collin Mulliner demonstrated a method for attacking iPhones with malicious text messages, prompting Apple Inc to release a patch. In 2011, Jay Radcliffe demonstrated methods for attacking Medtronic Inc’s insulin pumps, which helped prompt an industry review of security.
Santamarta published a 25-page research report in April that detailed what he said were multiple bugs in firmware used in satellite communications equipment made by Cobham, Harris, Hughes, Iridium and Japan Radio Co for a wide variety of industries, including aerospace, military, maritime transportation, energy and communications.
The report laid out scenarios by which hackers could launch attacks, though it did not provide the level of technical details that Santamarta said he will disclose at Black Hat.
Risk ‘very small’
Harris spokesman Jim Burke said the company had reviewed Santamarta’s paper. “We concluded that the risk of compromise is very small,” he said.
Iridium spokesman Diane Hockenberry said, “We have determined that the risk to Iridium subscribers is minimal, but we are taking precautionary measures to safeguard our users.”
One vulnerability that Santamarta said he found in equipment from all five manufacturers was the use of “hardcoded” log-in credentials, which are designed to let service technicians access any piece of equipment with the same login and password.
The problem is that hackers can retrieve those passwords by hacking into the firmware, then use the credentials to access sensitive systems, Santamarta said.
Hughes spokeswoman Judy Blake said hardcoded credentials were “a necessary” feature for customer service. The worst a hacker could do is to disable the communication link, she said.
Santamarta said he will respond to the comments from manufacturers during his presentation, then take questions during an open Q&A session after his talk.
Vincenzo Iozzo, a member of Black Hat’s review board, said Santamarta’s paper marked the first time a researcher had identified potentially devastating vulnerabilities in satellite communications equipment.
“I am not sure we can actually launch an attack from the passenger inflight entertainment system into the cockpit,” he said. “The core point is the type of vulnerabilities he discovered are pretty scary just because they involve very basic security things that vendors should already be aware of.”
New Rehash of old news… But if you haven’t seen it, its worth remembering! MSM confirms!
• Secret deal places no legal limits on use of data by Israelis
• Only official US government communications protected
• Agency insists it complies with rules governing privacy
• Read the NSA and Israel’s ‘memorandum of understanding’
The agreement for the US to provide raw intelligence data to Israel was reached in principle in March 2009, the document shows. Photograph: James Emery
The National Security Agency routinely shares raw intelligence data withIsrael without first sifting it to remove information about US citizens, a top-secret document provided to the Guardian by whistleblower Edward Snowden reveals.
Details of the intelligence-sharing agreement are laid out in a memorandum of understanding between the NSA and its Israeli counterpart that shows the US government handed over intercepted communications likely to contain phone calls and emails of American citizens. The agreement places no legally binding limits on the use of the data by the Israelis.
The disclosure that the NSA agreed to provide raw intelligence data to a foreign country contrasts with assurances from the Obama administrationthat there are rigorous safeguards to protect the privacy of US citizens caught in the dragnet. The intelligence community calls this process “minimization”, but the memorandum makes clear that the information shared with the Israelis would be in its pre-minimized state.
The deal was reached in principle in March 2009, according to the undated memorandum, which lays out the ground rules for the intelligence sharing.
The five-page memorandum, termed an agreement between the US and Israeli intelligence agencies “pertaining to the protection of US persons”, repeatedly stresses the constitutional rights of Americans to privacy and the need for Israeli intelligence staff to respect these rights.
But this is undermined by the disclosure that Israel is allowed to receive “raw Sigint” – signal intelligence. The memorandum says: “Raw Sigint includes, but is not limited to, unevaluated and unminimized transcripts, gists, facsimiles, telex, voice and Digital Network Intelligence metadataand content.”
According to the agreement, the intelligence being shared would not be filtered in advance by NSA analysts to remove US communications. “NSA routinely sends ISNU [the Israeli Sigint National Unit] minimized and unminimized raw collection”, it says.
Although the memorandum is explicit in saying the material had to be handled in accordance with US law, and that the Israelis agreed not to deliberately target Americans identified in the data, these rules are not backed up by legal obligations.
“This agreement is not intended to create any legally enforceable rights and shall not be construed to be either an international agreement or a legally binding instrument according to international law,” the document says.
In a statement to the Guardian, an NSA spokesperson did not deny that personal data about Americans was included in raw intelligence data shared with the Israelis. But the agency insisted that the shared intelligence complied with all rules governing privacy.
“Any US person information that is acquired as a result of NSA’ssurveillance activities is handled under procedures that are designed to protect privacy rights,” the spokesperson said.
The NSA declined to answer specific questions about the agreement, including whether permission had been sought from the Foreign Intelligence Surveillance (Fisa) court for handing over such material.
The memorandum of understanding, which the Guardian is publishing in full, allows Israel to retain “any files containing the identities of US persons” for up to a year. The agreement requests only that the Israelis should consult the NSA’s special liaison adviser when such data is found.
Notably, a much stricter rule was set for US government communications found in the raw intelligence. The Israelis were required to “destroy upon recognition” any communication “that is either to or from an official of the US government”. Such communications included those of “officials of the executive branch (including the White House, cabinet departments, and independent agencies), the US House of Representatives and Senate (member and staff) and the US federal court system (including, but not limited to, the supreme court)”.
It is not clear whether any communications involving members of US Congress or the federal courts have been included in the raw data provided by the NSA, nor is it clear how or why the NSA would be in possession of such communications. In 2009, however, the New York Times reported on “the agency’s attempt to wiretap a member of Congress, without court approval, on an overseas trip”.
The NSA is required by law to target only non-US persons without an individual warrant, but it can collect the content and metadata of Americans’ emails and calls without a warrant when such communication is with a foreign target. US persons are defined in surveillance legislation as US citizens, permanent residents and anyone located on US soil at the time of the interception, unless it has been positively established that they are not a citizen or permanent resident.
Moreover, with much of the world’s internet traffic passing through US networks, large numbers of purely domestic communications also get scooped up incidentally by the agency’s surveillance programs.
The document mentions only one check carried out by the NSA on the raw intelligence, saying the agency will “regularly review a sample of files transferred to ISNU to validate the absence of US persons’ identities”. It also requests that the Israelis limit access only to personnel with a “strict need to know”.
Israeli intelligence is allowed “to disseminate foreign intelligence information concerning US persons derived from raw Sigint by NSA” on condition that it does so “in a manner that does not identify the US person”. The agreement also allows Israel to release US person identities to “outside parties, including all INSU customers” with the NSA’s written permission.
Although Israel is one of America’s closest allies, it is not one of the inner core of countries involved in surveillance sharing with the US – Britain, Australia, Canada and New Zealand. This group is collectively known asFive Eyes.
The relationship between the US and Israel has been strained at times, both diplomatically and in terms of intelligence. In the top-secret 2013 intelligence community budget request, details of which were disclosed by the Washington Post , Israel is identified alongside Iran and China as a target for US cyberattacks.
While NSA documents tout the mutually beneficial relationship of Sigintsharing, another report, marked top secret and dated September 2007, states that the relationship, while central to US strategy, has become overwhelmingly one-sided in favor of Israel.
“Balancing the Sigint exchange equally between US and Israeli needs has been a constant challenge,” states the report, titled ‘History of the US – Israel Sigint Relationship, Post-1992′. “In the last decade, it arguably tilted heavily in favor of Israeli security concerns. 9/11 came, and went, with NSA’s only true Third Party [counter-terrorism] relationship being driven almost totally by the needs of the partner.”
In another top-secret document seen by the Guardian, dated 2008, a senior NSA official points out that Israel aggressively spies on the US. “On the one hand, the Israelis are extraordinarily good Sigint partners for us, but on the other, they target us to learn our positions on Middle East problems,” the official says. “A NIE [National Intelligence Estimate] ranked them as the third most aggressive intelligence service against the US.”
Later in the document, the official is quoted as saying: “One of NSA’s biggest threats is actually from friendly intelligence services, like Israel. There are parameters on what NSA shares with them, but the exchange is so robust, we sometimes share more than we intended.”
The memorandum of understanding also contains hints that there had been tensions in the intelligence-sharing relationship with Israel. At a meeting in March 2009 between the two agencies, according to the document, it was agreed that the sharing of raw data required a new framework and further training for Israeli personnel to protect US personinformation.
It is not clear whether or not this was because there had been problems up to that point in the handling of intelligence that was found to contain Americans’ data.
However, an earlier US document obtained by Snowden, which discusses co-operating on a military intelligence program, bluntly lists under the cons: “Trust issues which revolve around previous ISR [Israel] operations.”
The Guardian asked the Obama administration how many times US data had been found in the raw intelligence, either by the Israelis or when theNSA reviewed a sample of the files, but officials declined to provide this information. Nor would they disclose how many other countries the NSA shared raw data with, or whether the Fisa court, which is meant to oversee NSA surveillance programs and the procedures to handle US information, had signed off the agreement with Israel.
In its statement, the NSA said: “We are not going to comment on any specific information sharing arrangements, or the authority under which any such information is collected. The fact that intelligence services work together under specific and regulated conditions mutually strengthens the security of both nations.
“NSA cannot, however, use these relationships to circumvent US legal restrictions. Whenever we share intelligence information, we comply with all applicable rules, including the rules to protect US person information.”
Yeah – we all saw Terminator….. Reality may be worse.
Elon Musk, the Tesla and Space-X founder who is occasionally compared to comic book hero Tony Stark, is worried about a new villain that could threaten humanity—specifically the potential creation of an artificial intelligence that is radically smarter than humans, with catastrophic results:
Musk is talking about “Superintelligence: Paths, Dangers, Strategies” by Nick Bostrom of the University of Oxford’s Future of Humanity Institute. The book addresses the prospect of an artificial superintelligence that could feasibly be created in the next few decades. According to theorists, once the AI is able to make itself smarter, it would quickly surpass human intelligence.
What would happen next? The consequences of such a radical development are inherently difficult to predict. But that hasn’t stopped philosophers, futurists, scientists and fiction writers from thinking very hard about some of the possible outcomes. The results of their thought experiments sound like science fiction—and maybe that’s exactly what Elon Musk is afraid of.
AIs: They’re not just like us
“We cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans—scientific curiosity, benevolent concern for others, spiritual enlightenment and contemplation, renunciation of material acquisitiveness, a taste for refined culture or for the simple pleasures in life, humility and selflessness, and so forth,” Bostrom has written (pdf, pg. 14). (Keep in mind, as well, that those values are often in short supply among humans.)
“It might be possible through deliberate effort to construct a superintelligence that values such things, or to build one that values human welfare, moral goodness, or any other complex purpose that its designers might want it to serve,” Bolstroms adds. “But it is no less possible—and probably technically easier—to build a superintelligence that places final value on nothing but calculating the decimals of pi.”
And it’s in the ruthless pursuit of those decimals that problems arise.
Artificial intelligences could be created with the best of intentions—to conduct scientific research aimed at curing cancer, for example. But when AIs become superhumanly intelligent, their single-minded realization of those goals could have apocalyptic consequences.
“The basic problem is that the strong realization of most motivations is incompatible with human existence,” Daniel Dewey, a research fellow at the Future of Humanity Institute, said in an extensive interview with Aeon magazine. “An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don’t take root systems or ant colonies into account when we go to construct a building.”
Put another way by AI theorist Eliezer Yudkowsky of the Machine Intelligence Research Institute: “The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else.”
Be careful what you wish for
Say you’re an AI researcher and you’ve decided to build an altruistic intelligence—something that is directed to maximize human happiness. As Ross Anderson of Aeon noted, “an AI might think that human happiness is a biochemical phenomenon. It might think that flooding your bloodstream with non-lethal doses of heroin” is the best way to reach that goal.
Or what if you direct the AI to “protect human life”—nothing wrong with that, right? Except if theAI, vastly intelligent and unencumbered by human conceptions of right and wrong, decides that the best way to protect humans is to physically restrain them and lock them into climate-controlled rooms, so they can’t do any harm to themselves or others? Human lives would be safe, but it wouldn’t be much consolation.
AI Mission Accomplished
James Barrat, the author of “Our Final Invention: Artificial Intelligence and the End of the Human Era,” (another book endorsed by Musk) suggests that AIs, whatever their ostensible purpose, will have a drive for self-preservation and resource acquisition. Barrat concludes that “without meticulous, countervailing instructions, a self-aware, self-improving, goal-seeking system will go to lengths we’d deem ridiculous to fulfill its goals.”
Even an AI custom-built for a specific purpose could interpret its mission to disastrous effect. Here’s Stuart Armstrong of the Future of Humanity Institute in an interview with The Next Web:
Take an anti-virus program that’s dedicated to filtering out viruses from incoming emails and wants to achieve the highest success, and is cunning and you make that super-intelligent. Well it will realize that, say, killing everybody is a solution to its problems, because if it kills everyone and shuts down every computer, no more emails will be sent and and as a side effect no viruses will be sent. This is sort of a silly example but the point it illustrates is that for so many desires or motivations or programmings, “kill all humans” is an outcome that is desirable in their programming.
Even an “oracular” AI could be dangerous
Ok, what if we create a computer that can only answer questions posed to it by humans. What could possibly go wrong? Here’s Dewey again:
Let’s say the Oracle AI has some goal it wants to achieve. Say you’ve designed it as a reinforcement learner, and you’ve put a button on the side of it, and when it gets an engineering problem right, you press the button and that’s its reward. Its goal is to maximize the number of button presses it receives over the entire future.
Eventually the AI—which, remember, is unimaginably smart compared to the smartest humans—might figure out a way to escape the computer lab and make its way into the physical world, perhaps by bribing or threatening a human stooge into creating a virus or a special-purpose nanomachine factory. And then it’s off to the races. Dewey:
Now this thing is running on nanomachines and it can make any kind of technology it wants, so it quickly converts a large fraction of Earth into machines that protect its button, while pressing it as many times per second as possible. After that it’s going to make a list of possible threats to future button presses, a list that humans would likely be at the top of. Then it might take on the threat of potential asteroid impacts, or the eventual expansion of the Sun, both of which could affect its special button.
The dire scenarios listed above are only the consequences of a benevolent AI, or at worst one that’s indifferent to the needs and desires of humanity. But what if there was a malicious artificial intelligence that not only wished to do us harm, but that retroactively punished every person who refused to help create it in the first place?
This theory is a mind-boggler, most recently explained in great detail by Slate, but it goes something like this: An omniscient evil AI that is created at some future date has the ability to simulate the universe itself, along with everyone who has ever lived. And if you don’t help the AIcome into being, it will torture the simulated version of you—and, P.S., we might be living in that simulation already.
This thought experiment was deemed so dangerous by Eliezer “The AI does not love you” Yudkowsky that he has deleted all mentions of it on LessWrong, the website he founded where people discuss these sorts of conundrums. His reaction, as highlighted by Slate, is worth quoting in full:
Listen to me very closely, you idiot.
YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERINGWHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVESTHEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.
You have to be really clever to come up with a genuinely dangerous thought.
Next Page »