eldavojohn writes "PBS's NewsHour interviewed Geoffrey Donovan on his recent research published in the American Journal of Preventive Medicine that noted a correlation between trees (at least the 22 North American ash varieties) and human health: 'Well my basic hypothesis was that trees improve people's health. And if that's true, then killing 100 million of them in 10 years should have an effect. So if we take away these 100 million trees, does the health of humans suffer? We found that it does.' The basis of this research is that Agrilus planipennis, the emerald ash borer, has systematically destroyed 100 million trees in the eastern half of the United States since 2002. After accounting for all variables, the research found that an additional 15,000 people died from cardiovascular disease and 6,000 more from lower respiratory disease in the 15 states infected with the bug, compared with uninfected areas of the country. While the exact causal mechanism remains unknown, this research offers reinforcing data for people who regularly enjoy forest bathing, as well as evidence that the natural environment provides major public health benefits."
mikejuk writes "The BBC home page has just lost its clock because the BBC Trust upheld a complaint that it was inaccurate. The clock would show the current time on the machine it was being viewed on, not an accurate time as determined by the BBC. However, the BBC has responded to the accusations of inaccuracy by simply removing the clock, stating that it would take 100 staff days to fix. It further says: 'Given the technical complexities of implementing an alternative central clock, and the fact that most users already have a clock on their computer screen, the BBC has taken the decision to remove the clock from the Homepage in an upcoming update.' It added, '...the system required to do this "would dramatically slow down the loading of the BBC homepage", something which he said was "an issue of great importance to the site's users". Secondly, if the site moved to a format in which users across the world accessed the same homepage, irrespective of whichever country they were in, it would be "impossible to offer a single zonally-accurate clock."'"
hypnosec writes "Security expert Tavis Ormandy has discovered a vulnerability in the Windows kernel which, when exploited, would allow an ordinary user to obtain administrative privileges on the system. The Google security pro posted the details of the vulnerability back in May through the Full Disclosure mailing list rather than reporting it to Microsoft first. He has now gone ahead and published a working exploit. This is not the first instance in which Ormandy has opted for full disclosure without first informing the vendor of the affected software."
Yesterday, we mentioned a just-approved effort to uncover the remains of goods dumped by Atari in New Mexico decades ago. New submitter Essellion writes "Among the games that, legend has it, are buried there is the Atari 2600 E.T. game, infamous for how bad it was. However, an excavator of another kind has cast doubt on just how bad it was by exploring the E.T. ROM in depth (how it played and why) and designing some bug fixes for it."
The Next Web reports that GitHub — home to many open source projects — suffered (and quickly recovered from) a service outage this morning, starting around 14:00 UTC. Other than that the problem "appears to have been caused by its database server," the cause isn't clear.
An anonymous reader writes "The presence of 0-day vulnerability exploitation is often a real and considerable threat to the Internet — particularly when very popular consumer-level software is the target. Google's stance on a 60-day turnaround of vulnerability fixes from discovery, and a 7-day turnaround of fixes for actively exploited unpatched vulnerabilities, is rather naive and devoid of commercial reality. As a web services company it is much easier for Google to develop and roll out fixes promptly — but for 95+% of the rest of the world's software development companies making thick-client, server and device-specific software this is unrealistic. Statements like these from Google clearly serve its business objectives as predominantly a web services company with many of the world's best software engineers and researchers working for it. One could argue that Google's applications and software should already be impervious to vulnerabilities (i.e. they should have discovered them themselves through internal QA processes) — rather than relying upon external researchers and bug hunters stumbling over them."
First time accepted submitter Emmanuel Cecchet writes "Researchers on the BenchLab project at UMass Amherst have discovered a bug in the browser of the Samsung S3. If you browse a Web page that has multiple versions of the same image (for mobile, tablet, desktop, etc.), like most Wikipedia pages for example, instead of downloading one image at the right resolution, the phone will download all versions of it. A page that should be less than 100K becomes multiple MB! It looks like a bug in the implementation of the srcset HTML attribute, but all the details are in the paper to be presented at the IWQoS conference next week. So far Samsung hasn't acknowledged the problem, though it seems to affect all S3 phones. You'd better have an unlimited data plan if you browse Wikipedia on an S3!"
Trailrunner7 writes "Bug bounty programs have been a boon for both researchers and the vendors who sponsor them. From the researcher's perspective, having a lucrative outlet for the work they put in finding vulnerabilities is an obvious win. Many researchers do this work on their own time, outside of their day jobs and with no promise of financial reward. The willingness of vendors such as Google, Facebook, PayPal, Barracuda, Mozilla and others to pay significant amounts of money to researchers who report vulnerabilities to them privately has given researchers both an incentive to find more vulnerabilities and a motivation to not go the full disclosure route. This set of circumstances could be an opportunity for the federal government to step in and create its own separate bug reward program to take up the slack. Certain government agencies already are buying vulnerabilities and exploits for offensive operations. But the opportunity here is for an organization such as US-CERT, a unit of the Department of Homeland Security, to offer reasonably significant rewards for vulnerability information to be used for defensive purposes. There are a large number of software vendors who don't pay for vulnerabilities, and many of them produce applications that are critical to the operation of utilities, financial systems and government networks. DHS has a massive budget (a $39 billion request for fiscal 2014), and a tiny portion of that, allocated to buying bugs from researchers, could have a significant effect on the security of the nation's networks. Once the government buys the vulnerability information, it could then work with the affected vendors on fixes, mitigations and notifications for customers before details are released."
dargaud writes "Mark Shuttleworth of Ubuntu fame has closed the primal bug on Launchpad, standing since 2004 and titled 'Microsoft has a majority market share,' due to the 'changing realities' of tablets, smartphones, and wearable computing."
twoheadedboy writes "Nasdaq has been fined $10 million by the U.S. Securities and Exchange Commission over 'poor systems and decision-making' during the Facebook initial public offering. When Facebook went public on 18 May 2012, it was hoping for a major success, but technical glitches and poor decision making at Nasdaq caused real problems. The SEC said 'a design limitation' in the system to match IPO buy and sell orders was at the root of the disruption, thought to have cost investors $500 million. Orders failed to register properly, leaving banks like Citigroup and UBS in the lurch and making additional, unnecessary bids. They may still win money back from Nasdaq if legal challenges go their way."
itwbennett writes "In a follow-up to 17-year-old Robert Kugler's claim that PayPal denied him a bug bounty because he was under 18, the company now says that it is 'investigating whether it can lower the qualifying age for vulnerability rewards for those who responsibly report security problems.' The company also said that the vulnerability had already been reported by another researcher — although they didn't mention that in the email to Kugler telling him he wouldn't be receiving payment."
itwbennett writes "You have to be 18 to qualify for PayPal's bug bounty program, a minor detail that 17-year-old Robert Kugler found out the hard way after being denied a reward for a website bug he reported. Curiously, the age requirement isn't in the terms and conditions posted on the PayPal website. Kugler was informed by email that he was disqualified because of his age."
Zothecula writes "Robots are getting down to the size of insects, so it seems only natural that they should be getting insect eyes. A consortium of European researchers has developed the Curved Artificial Compound Eye (CurvACE), which reproduces the architecture of the eyes of insects and other arthropods. The aim isn't just to provide machines with an unnerving bug-eyed stare, but to create a new class of sensors that exploit the wide field of vision and motion-detecting properties of the compound eye."
An anonymous reader writes "I run a small software consulting company that outsources most of its work to contractors. I market myself as being able to handle any technical project, but only really take the fun ones, then shop them around to developers who are interested. I write excellent product specs, provide bug tracking and source control, and in general am a programming project manager with empathy for developers. I don't ask them to work weekends, I provide detailed, reproducible bug reports, and I pay on time. The only 'rule' (if you can call it that) is: I do not pay for bugs. Developers can make more work for themselves by causing bugs, and with the specifications I write there is no excuse for not testing their code. Developers are always fine with this until we get toward the end of a project and the customer is complaining about bugs. Then all of a sudden I am asking my contractors to work for 'free' and they can make more money elsewhere. Ugh. Every project ends up being a battle, so I think the solution is to finally hire someone full-time, pay for everything (bugs or not), and just keep them busy. But how can I make that transition? The guy I'd need to hire would have to know a lot of languages and be proficient in all of them. Plus, I can't afford to pay someone $100k/year right now. Ideas?"
ASDFnz writes "It has been just over two months since the bitcoin block chain was rocked by a near-disastrous fork that caused the bitcoin price to crash. The culprit was found to be a bug that prevented pre-0.8 bitcoin clients from accepting large blocks that could be generated by version 0.8 clients. A temporary fix was put into place by Bitcoin Project lead developer Gavin Andresen that forced version 0.8 clients to generate blocks that the older clients could understand. It is important to note, though, that the fix was a temporary one! In just under two days, on the 15th of May, the fix will expire and version 0.8 clients will once again be able to make large blocks that older clients will not be able to understand."
An anonymous reader writes "The author of this article goes over a format string vulnerability he found in The Elder Scrolls series starting with Morrowind and going all the way up to Skyrim. It's not something that will likely be exploited, but it's interesting that the vulnerability has lasted through a decade of games. 'Functions like printf() and its variants allow us to view and manipulate the program’s running stack frame by specifying certain format string characters. By passing %08x.%08x.%08x.%08x.%08x, we get 5 parameters from the stack and display them in an 8-digit padded hex format. The format string specifier ‘%s’ displays memory from an address that is supplied on the stack. Then there’s the %n format string specifier – the one that crashes applications because it writes addresses to the stack. Powerful stuff.'"
Mobile photo-sharing app SnapChat has one claim to fame, compared to other ways people might share photos from their cellphones: the photos, once viewed, disappear from view, after a pre-set length of time. However, it turns out they don't disappear as thoroughly as users might like. New submitter nefus writes with this excerpt from Forbes: "Richard Hickman of Decipher Forensics found that it's possible to pull Snapchat photos from Android phones simply by downloading data from the phone using forensics software and removing a '.NoMedia' file extension that was keeping the photos from being viewed on the device. He published his findings online and local TV station KSL has a video showing how it's done."
Nerval's Lobster writes "Online economies come with their own issues. Case in point is the Auction House for Diablo III, a massively multiplayer game in which players can pay for items in either in-game gold or real-world dollars. Thanks to a bug in the game's latest patch, players could generate massive amounts of virtual gold with little effort, which threatened to throw the in-game economy seriously out of whack. Diablo series publisher Blizzard took corrective steps, but the bug has already attracted a fair share of buzz on gaming and tech-news forums. 'We're still in the process of auditing Auction House and gold trade transactions,' read Blizzard's note on the Battle.net forums. 'We realize this is an inconvenience for many of our players, and we sincerely apologize for the interruption of the service. We hope to have everything back up as soon as possible.' Blizzard was unable to offer an ETA for when the Auction House would come back. 'We'll continue to provide updates in this thread as they become available.' Diablo's gold issue brings up (however tangentially) some broader issues with virtual currencies, namely the bugs and workarounds that can throw an entire micro-economy out of whack. But then again, 'real world' markets have their own software-related problems: witness Wall Street's periodic 'flash crashes' (caused, many believe, by the rise of ultra-high-speed computer trading)." It seems likely the gold duping was due to a simple integer overflow bug. A late change added to the patch allowed users to sell gold on the Real Money Auction House in stacks of 10 million rather than stacks of 1 million. On the RMAH, there exists both a cap ($250) and a floor ($0.25) for the value of auctions. With stacks of 1 million and a floor of $0.25, a seller could only enter 1 billion gold (1,000 stacks) while staying under the $250 cap. When the gold stack size increased, the value of gold dropped significantly. 
At $0.39 per 10 million, a user could enter values of up to 6.4 billion gold at a time. Unfortunately, the RMAH wasn't designed to handle gold numbers above 2^31, or 2,147,483,648 gold. Creating the auction wouldn't remove enough gold, but canceling it would return the full amount.
FuzzNugget writes "According to Wired, the two CFAA charges that were laid against the man who exploited a software bug on a video poker machine have been officially dismissed. Says Wired: '[U.S. District Judge Miranda] Du had asked prosecutors to defend their use of the federal anti-hacking law by Wednesday, in light of a recent 9th Circuit ruling that reined in the scope of the CFAA. The dismissal leaves John Kane, 54, and Andre Nestor, 41, facing a single remaining charge of conspiracy to commit wire fraud.' Kane's lawyer agreed, stating, 'The case never should have been filed under the CFAA, it should have been just a straight wire fraud case. And I'm not sure it's even a wire fraud. I guess we'll find out when we go to trial.'"
An anonymous reader writes "A new report details the analysis of more than 450 million lines of software through the Coverity Scan service, which began as the largest public-private sector research project focused on open source software integrity, initiated by Coverity and the U.S. Department of Homeland Security in 2006. Code quality for open source software continues to mirror that of proprietary software — and both continue to surpass the industry standard for software quality. Defect density (defects per 1,000 lines of software code) is a commonly used measurement for software quality. The analysis found an average defect density of 0.69 for open source software projects, and an average defect density of 0.68 for proprietary code."