All Major Browsers Fall at Pwn2Own Day Two (threatpost.com)
488 points by wglb on March 20, 2015 | hide | past | favorite | 270 comments


> Lee wasn’t done and went on to bolster his daily total to $225,000 later in the day

Hooooly shit. That's insane, and if that's what you get from the company, I can't even imagine the price of those exploits on the black market...


It's large when you present it as a single-day payout. If you imagine it might have taken a year of work to discover and exploit these vulnerabilities, with no guarantee of success, then the money is actually relatively low compared to standard tech benefit packages. Tons of respect for anyone who does this and avoids the urge to go to the black market (or NSA, etc).


>225k is standard?

100k income per year + 33% for employer taxes, medical/fringe benefits, etc, so 133k.

1.5 years of work at 133k/yr = ~200k payout.

I wouldn't call that low since it already hinges on massively inflated salaries due to massively inflated cost of living for developers in hubs.

That 100k of buying power in SF would come from about a 55-60k salary in my area. (1)

(1) http://www.wolframalpha.com/input/?i=moving+from+San+Francis...
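
The back-of-the-envelope math above can be sketched directly. Note that the 33% overhead rate and the 1.5-year timeline are this commenter's assumptions, not established figures:

```python
# Rough comparison of a one-time $225k payout vs. a salaried position.
# All figures are the commenter's assumptions.
salary = 100_000           # nominal yearly salary
overhead_rate = 0.33       # employer taxes, medical/fringe benefits, etc.
years_of_work = 1.5        # hypothetical time to find and exploit the bugs

cost_per_year = salary * (1 + overhead_rate)
equivalent_payout = cost_per_year * years_of_work

print(f"Employer cost per year: ${cost_per_year:,.0f}")            # $133,000
print(f"{years_of_work} years of work: ${equivalent_payout:,.0f}")  # $199,500
```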


> massively inflated salaries

Why would you say that's inflated? Engineering is a professional occupation that requires a high degree of skill and education, much like law, medicine, and other professions. Many of the companies that employ software engineers at those salaries are highly profitable, from big names like Google, Oracle, Facebook, Apple and Amazon to smaller companies like Fog Creek, Atlassian, and New Relic. So why would you say they're inflated? Software engineers are employed by companies with real products that provide real value. Salaries have been at these levels for over a decade now. In fact, there was a big dip after 2000, when pay actually was hugely inflated, but within a few years pay rose again to current levels. I'd say when you've been earning a salary at these levels long enough to buy a home and raise a child, it's a stable market rate, not inflated.


They're inflated due to the extremely high cost of living in the areas where these jobs are common: Austin, San Francisco, Seattle, Vancouver. If someone at Google in SF is making $250k, their peer in Ann Arbor, MI (a small city with a Google office) could have the same purchasing power with maybe $80k. If the companies weren't located in SF, the salaries could be a lot lower for the same work, and the employees wouldn't really be making any less even with a lower salary.


Actually, I sort of think the cost of living is high because tech workers make so much money. So you have your cause and effect reversed. They do seem to be the primary reason why rents and housing prices are going up so much in the bay area.

E.g. I've read about landlords who raise their rent because of the presence of a google bus stop nearby.


> [Tech workers] do seem to be the primary reason why rents and housing prices are going up so much in the bay area.

In 1998 (at about the height of the dotcom bubble) a 1-bedroom apartment in my apartment building was renting for ~$500/month. In 2010 (shortly after the real-estate bubble collapse) that same apartment rented out for ~$1500/month.

Does it seem to you that tech worker salaries explain that 3x increase? Remember that as of last year, tech workers made up ~8% of SF's population.


> In 1998 (at about the height of the dotcom bubble) a 1-bedroom apartment in my apartment building was renting for ~$500/month.

That is very low according to my memory. I was in Santa Clara at that time and my rent was over $1000/mo.

Now, I wasn't down in SF with all the cool kids. I do remember that SF in that time frame was right at a gentrification inflection where there were really expensive places right next to buildings that should have been condemned, so I'm willing to concede your point.

> Does it seem to you that tech worker salaries explain that 3x increase? Remember that as of last year, tech workers made up ~8% of SF's population.

Yes. That one is easy. Nobody wants to rent to the 92% combined with highly restricted supply.

Even back in the DotBomb days, there was a reason why people were commuting from the Fresno area. (152 would be bumper to bumper at 4:30 in the morning!)


I could not imagine a worse fate than commuting from Fresno to the bay daily, let alone doing so in bumper to bumper traffic in the middle of the night.


You forgot the 40-60mph wind gusts, because those routes are passes through the mountains.

It was stupid, but that demonstrates how bad things are in the central valley.


In the case of San Francisco it is the combination of high salaries for tech workers and incredibly limited supply.


The marginal cost of living in the Bay Area vs Ann Arbor is maybe 40k/year, most of which is housing. It's reasonable to compare $80k in Ann Arbor to $120k in SF, but $250k is ridiculous.


Inflated implies bubble, deflatable. I think it's just the market rate for living in those places. There's no bubble to pop here. Yes, it's true the cost of living is high in these areas, but I don't think that means the wages are inflated. How would you derive a non-inflated base pay that eliminates cost of living?


Inflated implies larger than normal. "Bubble", when it comes to tech, has a negative connotation. SF could totally deflate. Imagine if Google and Apple moved away. Boom, there goes the neighborhood. If those people are gone, the prices go down.


Imagine if all those people abandoned by Google and Apple stayed put and started new companies.


Imagine if all those people were paid by Google and Apple to move.


Yes, I can imagine them then going to have a friendly lunch with their friends working for Facebook and Twitter. Offer letter by end of week, problem solved.


Okay, so salaries in developer hubs aren't inflated. They're correct.

So then developer salaries in other parts of the country that are 40-50% lower, due to correspondingly low costs of living in those areas, are what, then?


While I don't doubt that they pay more because of the Bay Area, the best developers are worth more money, and I think the best developers tend to flock to the Bay Area for the network effects.


This is patently false. Some things cost a lot more in SF, like rent; most other things are slightly more expensive or the same in price, like a car or a trip to Europe. So really, a $250k salary in SF is like a $225k salary in Michigan, allowing $2k more a month for rent, which I think is generous, but I don't know the market right now.


Is rent tax deductible now? :P

Also remember all your local services are done by employees that have to pay rent too, and businesses have to pay rent (that one actually is tax deductible).


I've lived in the bay area; it was more expensive than, say, SLC, but not incredibly so. Your food, your clothes, much of what you need to live, is actually sourced outside of the area. Starbucks was cheaper than in SLC, for some strange reason I can't really understand. Maybe competition?

These days, I visit Seattle and Spokane a lot. Seattle is, of course, a lot more expensive than Spokane, or it is supposed to be. But restaurants are much more expensive in Spokane than Seattle. Heck, across the border in Idaho, where the minimum wage is a lot less than in WA, they are even more expensive than in Spokane.

I've lived in Switzerland and now live in China. These places are much more expensive for some goods (like cars and clothes; China is cheap for eating out, at least). The differentials are much greater than anything you would encounter in the States.


In New Jersey, at least up to a couple years ago, you could indeed deduct part of your rent (10%?) on the presumption that it had already gone towards property taxes.


$250k/yr doesn't allow for $2k/mo more spending than $225k/yr after factoring in federal taxes.

California state income tax is also more than double that of Michigan's at those income levels.


Fine, then increase that to $4k/mo (so your Michigander is making $200k vs. SF $250k), which doesn't pan out in my experience but we are still far away from $80k.


With major players having coordinated to suppress wages, it's an even more suspect statement.


There's absolutely no reason to go on that kind of rant. We all know that. What the parent post was referring to was only the inflation due to the cost of living in developer hubs like SF.


>Why would you say that's inflated? Engineering is a professional occupation that requires a high degree of skill and education.

That really depends on what kind of code you're writing. If you're gluing together web frameworks then, I'm sorry, but I wouldn't consider that engineering, and it certainly doesn't require a large amount of education and skill.


So you're saying that it's easy to hire talented web developers for low salaries, and the market is wrong?


I think he's saying that the market value for web developers is higher than the complexity of the work would justify. (i.e. their salaries are inflated)

Feel free to disagree with people, but please try to avoid misconstruing their arguments.


@zyx got it. Certainly there are always going to be people who are head and shoulders above the rest, but the context was typical/median salary. I would expect that those truly talented individuals make more. Your average web dev? I would say that 100k is an inflated salary and that it involves very little of what most would consider to be "engineering".


As birdmanjeremy posted below, you have to discount for risk. This year he got $225k, next year he may get nothing.

In any case I think you're underestimating the value of benefits provided by large tech companies. Free food, top-end health insurance, maternity/paternity leave, gym memberships, retirement contributions, bonuses, etc. all add up. I've commonly heard the rule of thumb that total cost-to-employer is roughly $2N for an employee making $N in nominal salary.

The point about location is reasonable: it's harder to get a six-figure salary doing remote work, which is effectively what this is. Still I think people shouldn't be shocked about the amount of money in play here. Given the risk, it's a good payout but not a great one for someone of his (clearly very high) skill level.


The 2x salary average is very misleading. It may be the average for all employees everywhere in the US, sure, but it's a sliding scale.

The higher your salary the less your employer is paying as a percentage of your income. There's very little difference in overhead between someone making $100k and someone making $150k. The overhead difference will only be a few thousand dollars more for the person making $150k.


"There's very little difference in overhead between someone making $100k and someone making $150k."

Logically, you would think so, but remember that many benefits scale with salary/seniority. Things like retirement plan matching and vacation accrual rate. Also perks like better offices, parking, "training" in attractive destinations.


> massively inflated salaries

Is it really fair to say developer salaries around the $200k mark are being inflated?

Developers' work can have an impact on millions of people, yet we keep expecting all of this work for a $60k salary.

I've found that only developers/programmers engage in this self-flagellation; almost all other professions complain about $200k being too low, not too high, and expect more. (Lawyers, doctors, and even some electrical/mechanical engineers.)

It's just a shame, as I find it hard to find good work as a programmer when my peers fight for lower and lower salaries.

I had a naive hope that this industry would mature and fight for higher salaries, but whenever I see programmers getting paid half that of a lawyer or doctor, the self-flagellation starts to come into play.

Such a shame for an industry with the most potential to change the world.


My problem with the Wolfram|Alpha numbers is they aren't realistic. I'm looking at moving to Chicago or Atlanta or somewhere back on the east coast from Seattle, and that Wolfram page shows that a 2-bedroom apartment (not house) in Atlanta is $1012. Where? Duluth? Buford? Snellville? Or to the south? If you compare like neighborhoods (Belltown in Seattle to Midtown in Atlanta), rents are usually very close to each other (within $100 from what I've found). But recruiters are fixated on the "It's so much cheaper".


A 2-bedroom apartment in Atlanta is ridiculously easy to find, especially if you live ITP like 95% of all Atlantans. (ATL metro population: ~5 million. ATL city population: ~500k.)

I rented a 1,700 sq ft, 2-bedroom, 2.5-bathroom, 2-story townhouse a 25-minute drive from downtown Atlanta for $850/mo, and it had some luxury; I could have easily found lower prices than that by moving a little further out (longer commute) or lowering the size/luxury.


You forgot to multiply by the probability of winning. If you have a 1 in 3 chance of winning, the expected payout is only $75k per attempt.


>the money is actually relatively low compared to standard tech benefit packages

225K is low compared to the standard? What standard are you talking about?


It's risk vs. reward. If he had a 50% chance of success and spent a year on the exploits, you could equate it to a $112.5k salary+benefits package (assuming it was a full-time job to find the exploits). I would call $112.5k low for the tech industry, especially for someone of his caliber.

Either way, very impressive and well deserved.
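
The risk-discounting in these comments is just an expected-value calculation. As a sketch (the success probabilities here are the commenters' hypotheticals, not known odds):

```python
# Expected value of a Pwn2Own attempt: the payout discounted by the
# probability of success. Probabilities are hypothetical.
def expected_payout(payout: float, p_success: float) -> float:
    return payout * p_success

# "1 in 3 chance of winning" -> ~$75k per attempt
print(expected_payout(225_000, 1 / 3))

# "50% chance of success" over a year of work -> $112.5k effective salary
print(expected_payout(225_000, 0.5))   # 112500.0
```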


I doubt he had a one-year timeline or any deadline in mind. Most of these hackers do this because they know that there's an exploit somewhere.

Come on, how many bugs do you produce while writing code? Would you bet $500 that you are bug-free? Would you bet $500 that the libraries you are using are bug-free? I wouldn't.

In this sense I think guys like him think more like a gamer: "I need to do this to get to the next level, but the experience of playing is more important than the next level!"

So... I doubt he thinks there's a 50% chance. He's just a very good "gamer", and he knows the game so well that he actually beat four different versions of it.


> I would call 112.5 low for the tech industry

Depends on location.


Lee is clearly extremely talented. He wouldn't be making 'standard' salaries, he'd be getting paid extremely well. And, I imagine he'll be getting some pretty impressive job offers now...


I think he is talking about the expected value vs a lucky chance.

It's like the stock market. In the long run there are no miracles.


Skill definitely plays a role too. If you are smart and experienced enough, you can theoretically churn out a valuable exploit once every 1-2 months on average. At a company, you're generally going to get a flat salary.

Many vuln research/development companies do offer bonuses for every effective exploit you write though, in which case it probably is better to work for such a company than to rely solely on bug bounties and competitions for income. Unfortunately, those same companies usually sell the exploits to the NSA and the intelligence agencies of other governments.


>avoids the urge to go to the black market (or NSA, etc).

You can still sell your exploit to the black(site) market and later collect a bounty on it. You take some risk that someone else finds it or the party you sold it to leaks it.

Price accordingly.


> You can still sell your exploit to the black(site) market and later collect a bounty on it.

Sounds like a good way to make dangerous enemies.


But if they are able to find it, they are probably earning enough month-to-month on various bug bounties. Heck, most of them are probably already running, or hired by, some of the top security research groups.


>the black market (or NSA, etc)

Legitimate question: What’s the difference?


> To finish it off Google’s Project Zero, as it usually does when Chrome is hacked at the event, paid Lee an extra $10,000.

So they don't actually intend to hire this guy?!


What do you think these guys earn per year with the knowledge they have? I think this money is only a fraction of what they earn in a year.


> Lee was able to take down both stable and beta versions of Chrome by exploiting a buffer overflow race condition in the browser. He then used an info leak and race condition in two Windows kernel drivers to secure SYSTEM access.

I'm trying to fully understand. Does this mean that by maliciously crafting a website, someone can get SYSTEM access on a windows machine just by getting someone to visit that site in Chrome?

And what does SYSTEM access mean? Is that user-level privileges or admin privileges?


That's correct. The best paying exploits at Pwn2Own are exploits where just browsing to a website gives an attacker root code execution on the machine.

These types of exploits pay well because they are extremely difficult to pull off, as they require multiple exploits to break through the browser security, out of the sandbox, and through the OS protections.


And yet every year they have been found. It's safe to assume browsing to a site can take over your entire machine at any time. Enjoy the web, guys!


On the slightly less dark side, we can probably assume that this class of exploits is so valuable that nobody would use it against a large number of people at once in an ordinary way, like sending out spam emails or comments with links to build a botnet. That would likely result in it getting noticed and patched quickly, for very little return on investment.

These types of exploits are probably only used by very well financed organizations in carefully targeted ways, so as to justify the huge cost of finding them and keep them unnoticed and unpatched for as long as possible. See the recent stories on the Equation group-associated malware. You and I are probably safe enough, but somebody like Edward Snowden better be very careful where he surfs.


I've never thought about surfing in a VM before, but it seems that if you're ultra-paranoid, it would be best to do so.

Starting to think that running apps within a container is the right way to go.


Or on a completely separate machine: exploits have been found for virtualization software. Of course, then you need to be concerned about the network and so on.

It's turtles all the way down.


IMHO, the bigger concern is the exploits that the wider community hasn't found yet. How many nasty tricks were out there in Stuxnet for years before anybody noticed? Just given the scale of the stuff that's been found, I'm inclined to think that trying to out-secure organizations with that kind of resources is a losing battle.

You're better off not being somebody anyone would invest that kind of effort to hack. Or at least making your traffic not identifiable as somebody your opponent would want to hack.

Last I checked, holders of large amounts of Bitcoin tend to avoid letting their identity be associated with their accounts easily, and that's probably why. I bet somebody would be willing to risk a few $100k exploits to get a few $million worth of BTC.


That's why I have a separate machine which I use just for browsing the Web, which I keep disconnected from the Internet in a sealed steel box.

...

Unplugged from the mains.


Qubes tries to build the OS that way.

https://www.qubes-os.org/


> Starting to think that running apps within a container is the right way to go.

If you're talking about something like Linux containers, note that the Chrome Pwn2Own exploit involved a sandbox escape via a kernel exploit. (Though seccomp-BPF mitigates this to some extent on Linux.)


See Sandboxie


And that is why you browse with a virtual machine which is re-initialized from the CD copy every time you restart :-)


.. running inside another virtual machine just to be secure from guest-to-host escape vulnerabilities.

Repeat as many times as necessary based on your paranoia levels. Inception.


So for a long time I had a spare VAX (MicroVAX III, if you're wondering) hooked up to a modem on an unused phone line (yes, it answered as KREMVAX), and when sufficient internet bandwidth came along, I put it on the Internet to watch people "hack" it. It was fun because they would get all confused when their x86 exploit code wasn't even a thing :-).

But your comment reminded me that if I ran a virtual VAX, and then a virtual Windows on top of that, and a browser in the Windows, breaking out of the 'guest' into VMS would really challenge the bad guys' toolbox in terms of zero days :-) Fun to contemplate on a Friday afternoon.


And nesting virtual machines is still useless if you have bugs in the CPU, like this one (2012):

http://www.kb.cert.org/vuls/id/649219


And then you add a hardware emulator!


That only applied to Xen's paravirtualization mode (running a modified guest OS as an unprivileged process), not hardware-assisted virtualization.


VMCeption


Not exactly - the economics are important.

Anyone who builds an exploit like this is not going to waste it. It's either going to be used in targeted attacks, or reserved for pwn2own. They're not going to burn it by leaving it on some hacked site to get used on your grandmother.

Those types of attacks happen after the exploit is already known and has patches available. The good stuff trickles down, and the early days of active exploitation are more or less restricted to those with the deep pockets and legal impunity.


True, it is a scary web, but how many of those exploits have relied on default settings and JavaScript to run?

You are many times safer on the web if you have the discipline to use NoScript properly.


> True, it is a scary web, but how many of those exploits have relied on default settings and JavaScript to run?

Layout (just to name one non-JS subsystem) has been responsible for a lot of vulnerabilities too.


True, but from experience it is often the case that exploits are obfuscated by being wrapped in one or more layers of JS, so that they are less likely to be discovered by passive scanners.

This will probably seem quite shocking, but I almost exclusively used IE6 for a few years, with JS disabled - and despite frequently visiting the "darkest corners" of the Internet, was never exploited. On the sites that tried, I'd just see a blank page; view the source, and there was a blob of obfuscated JS.


And if the person sitting next to you in the same organization doesn't do that (or any person doesn't), then if they get compromised, their box could attack yours with a different kind of payload.


Fun fact: NoScript loads and parses all JavaScript and then just stops it from running against the live DOM. Decreases page render time, sure. Prevents exploits? I don't think so.


NoScript won't actually execute any of the Javascript, though. I am not aware of any historical vulnerabilities from the mere act of loading and parsing Javascript, though they're certainly theoretically possible. It's much easier to secure a parser than a runtime.


I can't recall a single exploitable bug in these competitions that attacked the javascript parser. Running the js is a far larger surface area to attack.


That does not affect security significantly. If it did then it would totally defeat the purpose of noscript existing.


And yet every time JavaScript-blocking extensions are mentioned, they get ridiculed.

edit: case in point, someone posted about it right before and it's downvoted. Good job!


Welcome to Hacker News in 2015, where the community mindset has changed from the old school Slashdot's reverence for software freedom and ideology to modern Reddit's mantra of "Who cares? It works! Get real!"


1) >implying the mindset on HN hasn't always been "push things and break shit who cares"

2) /r/programming and other software development subreddits are actually closer to slashdot's ideologies (excepting /r/webdev where performance is an overrated thing)

The good news is that HN seems to be slowly going back to a more old school mindset. At least you can criticize javascript.framework.of.the.day.js without being showered in downvotes, as opposed to a year ago.


The regrettable truth is that this pushes us all toward a more centralized web, as it's much less likely that google.com or facebook.com is going to pwn your browser.

The real nightmare scenario is when developer machines get hacked and attackers get access to source, but also to build servers where they can inject binaries into the web deployment stream, which themselves exploit end-user browsers. This is, I think, a very good reason to not give devs any access to build/CI servers. (Ironically, actual endpoint servers are slightly less critical to secure from devs.)


The few times I've caught on-load viruses was when visiting big sites. Turns out their ad networks often get compromised and suddenly reddit or facebook are loading exploits.

Virtualizing and sandboxing the browser in a secure way is the only way out of this. On top of an eventual move to memory safe languages.


That's a damn good point. It also means that ad networks and CDNs are the best targets.


The lovely choice:

1. facebook where the NSA are checking on you

2. faceskibook where a Russian hacker is going to own your pc


Most (not all, but most) of these exploits cannot run if you use NoScript/NotScripts or equivalent.


And not a single modern website will run either. If you're willing to disable JavaScript in 2015, you might as well not use the Internet at all.


Yes, which is why NoScript allows fine-grained controls. It's not a straight up disabling of Javascript.

You whitelist the sites you trust. Then if any of those sites get hacked, there will likely be a line added like <iframe src="http://evil.com/exploit.html">, or <script src="http://evil.com/exploit.js"></script>. NoScript will always prevent those since whitelists are domain-specific. So unless the malicious Javascript is added 100% inline on the compromised site (which is easy to do, but is done less often for a variety of reasons), you're going to be safe.

And, obviously, you do not whitelist random sites you click off of Google searches or from emails without validation.
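
The whitelist logic described above can be sketched as a toy check. This is only an illustration of the idea, not NoScript's actual implementation, and the domains are examples:

```python
# Toy model of per-domain script whitelisting: a script only runs if its
# source domain is whitelisted, so an injected
# <script src="http://evil.com/exploit.js"> is blocked even on a page
# you trust. Not NoScript's real implementation; domains are examples.
from urllib.parse import urlparse

WHITELIST = {"forbes.com", "news.ycombinator.com"}

def script_allowed(script_src: str) -> bool:
    host = urlparse(script_src).hostname or ""
    # allow the whitelisted domain itself or any of its subdomains
    return any(host == d or host.endswith("." + d) for d in WHITELIST)

print(script_allowed("http://www.forbes.com/app.js"))   # True
print(script_allowed("http://evil.com/exploit.js"))     # False
```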


It definitely improves security, but you'll still be screwed against a targeted attack against a specific website.

The attack against Forbes.com is a good example. That is a site you would have whitelisted and the attackers compromised that website to then elevate access to the people browsing that site (government employees specifically targeted).

http://www.forbes.com/sites/thomasbrewster/2015/02/10/forbes...


I work in incident response and actually investigated that watering hole firsthand at our organization. The SWF was hosted on a separate domain, so NoScript in fact would have prevented that specific attack from working even if you whitelisted forbes.com.

NoScript will very often block exploit chains from occurring even if it won't necessarily always block stage 1.


So, most sites won't work, and the ones that do work will still be vulnerable? Awesome.


They're vulnerable if you whitelist them and they get hacked and exploit code is sent directly in the HTTP response. Just like how HTTPS (and HSTS and certificate pinning) does nothing for you if the server is compromised.

If you're careful about what you whitelist, NoScript adds a lot of additional security.


For me, the big win of things like NoScript is that I can block the zillion websites OTHER than the one I'm actually trying to view. This has the upside of making my web browsing amazingly fast because it stops the 50 DNS queries that every page makes.

Forbes.com is mentioned, here's what else is on that web page after enabling forbes.com:

Blacklisted: doubleclick.net, optimizely.com, bluekai.com, scorecardresearch.com, googletagmanager.com

Other: forbesimg.com, gigya.com, sail-horizon.com, amazon-adsystem.com, media.net, garble.cloudfront.net, chartbeat.com, mediavoice.com, .............

Do YOU trust that ALL of those sites got their Javascript secure? BWHAHAHAHAHAHAHAHAHAHA!

And, let's not even start with all the people who run "Blogoblather 0.4.5" (it will never reach 1.0) who could just post a static web page. No, I don't want to discuss this, and no I'm not logging in. Oh, and, gee, if it were a static web page, if you do get featured on Reddit, YOUR SERVER WON'T FALL OVER AND DIE accessing a database that doesn't have any comments other than spam anyway.


> And not a single modern website will run either.

I find that the vast majority of websites run perfectly fine with NoScript. What exactly qualifies as a modern website?

Even if they do "require" JS, you can enable one or two of the scripts that are actually necessary and keep the 22 ad/tracking/various-other-third-party scripts disabled to minimize the attack surface.


Then perhaps the problem is with "modern" sites superfluously using JS. The majority of sites that I've come across from casual browsing (usually via Google search) don't need it to display the content I'm looking for, and when I do find one that does and Google's cached version doesn't contain what I'm looking for, it's almost certain that I'll just move on to the next search result.


Another trick to try: If you use Firefox, then often you can see the contents of a site by View -> Page Style -> No Style.

Other than that, I agree with you. I never enable JS for a random site, I also "move on to the next search result".


I have JS turned off due to paranoia. And surprisingly, most websites do work without it! I also do not miss all the ads and other annoying stuff.

Even budget servers nowadays can handle thousands of requests per second, so there is no benefit in offloading "server stuff" to the browser client.

I do have to add some sites to the whitelist though.


Enough of the internet works for me with `javascript.enabled` set to false.


That seems pretty hyperbolic. There are plenty of reasons to not use javascript.


Perhaps HN isn't "modern" but it runs great w/o Javascript.


I run NoScript. Most websites are just fine.


That's a bit of a defeatist attitude. I prefer to believe that bad decisions about our civilization's communications infrastructure can be reversed.


Saying that using NoScript means disabling Javascript wholesale is like saying using SELinux or AppArmor means disabling filesystem access wholesale.

It's a disingenuous and silly argument.


[flagged]


> Please take a long walk off of a short pier.

Please don't post comments like this to Hacker News.

https://news.ycombinator.com/newsguidelines.html

https://news.ycombinator.com/newswelcome.html


Nonsense. Even PNG images have been used for attacking browsers.

Code is data, data is code. Both are interpreted, and both have exploitable flaws.


This particular exploit relies on Native Client, which is concerning since it is on by default and is hard to disable.


Hard to disable? Just open chrome://flags and click Disable under Native Client.


And how many people know of Native Client, let alone about chrome://flags? Not to mention that advice won't work when not using a persistent Chrome profile. And Google removed "--disable-plugins", so it is not even possible to automate disabling these plugins short of recompiling Chromium.


You have a point, but how many people even know what Native Client is? Or even JavaScript?


As a sysadmin, I find these exploits deeply disturbing. It used to be that people actually needed to download a file and, like, execute/open it for their systems to become infected.

Recently, my users have been hit by fake DHL/UPS "track your shipment" mails, and some of them were sufficiently un-paranoid to click on the links provided by the mails, which apparently just showed a generic-looking 404-page.

Which might mean that the offending page had already been taken down. But on the other hand, with these exploits, that can already be enough to infect an unsuspecting visitor.

Well, one more reason to use Dillo, I guess.


It's never been the case that people actually needed to download a file and open it to get infected. Actually, in some ways things have improved - it used to be a lot easier to find this kind of exploit, which is why there weren't rewards like this for finding one.


SYSTEM is roughly equivalent to root on linux.

My interpretation of the article is that he managed to escalate far enough to get into the kernel, allowing him to modify some data structure to give a userland process higher privileges.

So, the overall process could be something like Chrome escape => windows driver vuln => full control of the system.


SYSTEM is an internal admin-privilege account, IIRC. It's a user on Windows - http://i.imgur.com/MkP9pIm.png


> someone can get SYSTEM access on a windows machine just by getting someone to visit that site in Chrome?

Or, I believe, through any of the other Windows browsers. There was nothing Chrome specific about getting SYSTEM access.

All browsers were pwned, and Lee used kernel driver bugs to elevate privileges. That exploit could have been used in tandem with any of the browser exploits on Windows. Once you can run arbitrary code...


IIRC on Windows, SYSTEM is an account something less than full Administrator but more privileged than a normal user. I think many services run as SYSTEM.


SYSTEM is root.

SYSTEM is arguably not a user, because a user has things like a password, groups, and other user properties. SYSTEM on the other hand is a built in low level security context which cannot be removed, altered, or shared across a network. You cannot login as SYSTEM, but processes can run as SYSTEM. It has a static SID on all Windows systems as S-1-5.

Some of the processes that run as SYSTEM: ntoskrnl (the Windows kernel), all drivers, and all interrupt handlers. A few key services (not all, and fewer every release).

Literally it doesn't get higher than NT AUTHORITY\SYSTEM on Windows. The administrator account no longer exists on most Windows installations at all, and the administrators group is just that, a group.

It is like root on UNIX, if the root account had been configured to disable interactive login and to allow sudo only.

PS - SYSTEM is sometimes called NT AUTHORITY in documentation. I've seen it called both, I've also seen it called "NT AUTHORITY\SYSTEM." I believe the NT AUTHORITY is referring to the lowest level of permissions within the OS, and SYSTEM is at that level/root of the permissions table. So really NT AUTHORITY and SYSTEM are equivalent.


It is like root on UNIX, if the root account had been configured to disable interactive login and to allow sudo only.

Which is what Ubuntu actually does :)

https://help.ubuntu.com/community/RootSudo


Small nit:

S-1-5-18 is the SID+RID for "NT AUTHORITY\SYSTEM". S-1-5 is just the "NT AUTHORITY" prefix.
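To make the prefix/RID distinction concrete, here's a small illustrative Python helper (the well-known-SID table below is just a hand-picked sample, not an exhaustive or authoritative list): S-1-5 is revision 1 plus authority 5 (NT AUTHORITY), and the trailing sub-authority 18 is what actually names SYSTEM.

```python
# Minimal sketch: split a Windows SID string into its parts to show why
# S-1-5 is only the "NT AUTHORITY" prefix while S-1-5-18 names SYSTEM.
def parse_sid(sid: str):
    parts = sid.split("-")
    if parts[0] != "S" or len(parts) < 3:
        raise ValueError(f"not a SID: {sid!r}")
    revision = int(parts[1])
    authority = int(parts[2])                      # 5 = NT AUTHORITY
    sub_authorities = [int(p) for p in parts[3:]]  # the last one is the RID
    return revision, authority, sub_authorities

# A few well-known values, for illustration only:
WELL_KNOWN = {
    (1, 5, (18,)): "NT AUTHORITY\\SYSTEM",
    (1, 5, (19,)): "NT AUTHORITY\\LOCAL SERVICE",
    (1, 5, (20,)): "NT AUTHORITY\\NETWORK SERVICE",
}

rev, auth, subs = parse_sid("S-1-5-18")
print(WELL_KNOWN[(rev, auth, tuple(subs))])  # prints NT AUTHORITY\SYSTEM
```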


You cannot login as SYSTEM, but processes can run as SYSTEM

There's a trick to do this, I only remember it well for XP (via task scheduler/at command) but it's possible for the newer versions too.

In Windows, Administrator is not the true root.


Use psexec: https://technet.microsoft.com/en-us/sysinternals/bb897553.as...

It's occasionally useful to run regedit as SYSTEM.


That's not logging in as SYSTEM, that's running an explorer process as SYSTEM.

Another way to accomplish it in XP & such was by copying cmd.exe over one of the default screensavers (which are also just regular programs). When you're at the login screen the screensaver gets run as system, thus giving you the equivalent of a root shell. Launching explorer from it would result in a new user profile being created in explorer and you'd be "logged in" as SYSTEM, but not really.


When I configure a service that runs as the SYSTEM user, I cannot perform administrative calls from that service. For example, a PowerShell script fails with "access denied" when it performs an operation that requires admin rights. So I think running as SYSTEM does not make you root all the time? Or am I missing something?


Another small nit: Every Windows installation has two built-in accounts that can't be removed (but can be renamed!): Administrator and Guest. It's just that they're both disabled by default (since Vista I believe) and there's no easy GUI to enable Administrator while there is for Guest.


Thanks for this explanation! :)


Nope, it is MORE privileged than full Administrator!


You might think so, but check the service configuration (services.msc) - on a typical Windows machine, you will find plenty of services running as SYSTEM. Basically, nearly all services that do not need to face the network (Windows has another builtin account for that, which I think is called NETWORK or NETWORK SERVICE on English versions of Windows).


> He told Childs via translator that not only was it was his first time writing Native Client code but it was his first time dealing with a kernel exploit.

Well, I guess Lee has found a new lucrative hobby for rainy weekends.

More seriously, how can someone possibly own three major browsers in two days, and on a first try at this kind of sport? A pretty loud way to shout "Hello World"...


These exploits are all created well in advance. People hold them for the whole year just to use them here. They're usually made by teams, with one person chosen to run it at the event.

At the event, each person brings their exploit and the browser is run against it.

Somehow this gets changed to "browser hacked in seconds!" because the media is great like that.

That said, it is quite impressive for a newcomer to clean house like this, even if he did have a year to prepare, assuming he wasn't working with a team.


"People hold them for the whole year just to use them here"

But does this mean that they will leave the vulnerability alive for a year or half a year (or however long before the conference they found it) by not reporting it to the vendors until the conference? Because from the description it looks like it has to work on the latest version of the browser (for example Chrome 42).


There is another perverse incentive. It has been suggested that in previous years the browser vendors were sitting on fixes and waited till the week before pwn2own to release them.


Who suggested that and what reason do they have to believe it?


Correct. On the other hand, if this contest did not exist they may not be looking in the first place.


This was one of Google's stated motivations for recently changing Pwnium from an in-person event similar to Pwn2Own (and held at the same conference 3/4 times, IIRC) to a more traditional year-round bug bounty.

Me, I'm going to miss the experience of sitting at the little dinky hotel cafe with bad Wi-Fi and frantically trying to finish up the exploit before the contest ends. And the press coverage was a bonus...


Of course, with $225,000 on the line, who on earth wouldn't leave the vulnerability alive.


I am a friend of a friend of lokihardt, and yes it was a solo effort. Very impressive.


I don't think it's supposed to be read as being hacked in two days. I'm sure he's worked on them before attending the conference, but only presented them on the second day due to the format of the conference.


    Microsoft Windows: 5 bugs
    Microsoft IE 11: 4 bugs
    Mozilla Firefox: 3 bugs
    Adobe Reader: 3 bugs
    Adobe Flash: 3 bugs
    Apple Safari: 2 bugs
    Google Chrome: 1 bug
    $442,500 paid out to researchers
Looking at these figures, am I the only one who thinks competitions are not the most effective method for finding bugs? Of course, 21 bugs is not small and $442,500 is not nothing, but when you consider that these researchers spend months of their time finding those bugs, wouldn't it be more appropriate to spend that money on proper security audits? (I know it would cost more, but would it be more effective overall?)


It's not 21 bugs. Anyone can find 21 bugs. It's 21 bugs that enable/facilitate remote code execution. These are some of the worst of the worst. $442,500 is a steal to get these identified and fixed.


Additionally those 21 bugs are probably worth way more than a measly $442k if sold on the blackmarket or to the NSA, etc.


Are there actually reliable numbers/analyses on that? I've heard the sentiment many times, but I have seen no proof of it.


Finding bugs is easy; exploiting them is an order of magnitude harder. This competition is about exploitation.


I consider the purpose of competitions like this to be as much about finding bugs as the purpose of running a marathon is to get people from point A to point B. That is competitions like this are mainly about public relations in the form of what is intended to be enjoyable community involvement, finding bugs is a pleasant side effect, like the transportation aspect of a marathon.


Proper audits don't get much publicity.


Do people go into Pwn2Own knowing the exploits they will use in advance? I'm not sure I understand the format of such a contest.


Was similarly curious. Reading the rules here: http://zerodayinitiative.com/Pwn2Own2015Rules.html

Major points:

- You register for which browser + os combination[0]. Then they randomly order the contestants.

- When you are called, you have 30 minutes.

- The user browses to a particular piece of content that you specify. Then no further user interaction is allowed (like clicking a dialog, downloading a file). [1][2]

- The prize money goes to the first successful exploit. Money differs by browser.

[0] Chrome, Firefox, IE, Adobe Reader in IE, Adobe Flash in IE. Safari on OSX. Fully patched OS.

[1] How does one get to specify the content? What if I have an HTTP header that downloads a file?

[2] I remember back in the day, they used to have a fully non-interactive version? Like the user was just on the same wireless network?


The exploits that take down hardened browsers take months to develop. The time limits and race dynamics are theater.


>"The time limits and race dynamics are theater."

That makes more sense. Otherwise, this is movie-script-like hacking ability.


"I just need to break the encryption......ok...it's done."


It makes a bit more sense in the original context of the contest, where you were doing the attack on the actual hardware you would win. So time limits were so that everyone got a shot, and a race since there was no prize after someone won it.


I was curious, so here are the rewards:

Windows-based targets:

1. Google Chrome (64-bit): $75,000 (USD)

2. Microsoft Internet Explorer 11 (64-bit with EPM-enabled): $65,000 (USD)

3. Mozilla Firefox: $30,000 (USD)

4. Adobe Reader running in Internet Explorer 11 (64-bit with EPM-enabled): $60,000 (USD)

5. Adobe Flash (64-bit) running in Internet Explorer 11 (64-bit with EPM-enabled): $60,000 (USD)

Mac OS X-based targets:

1. Apple Safari (64-bit): $50,000 (USD)


So if I'm reading this right, you could spend months developing an exploit, and then not even get to use it if another contestant, randomly chosen to go before you, exploits the browser first?


At that point you would probably submit it through the regular bug bounty program. Pwn2own is just more lucrative.


Yes, I believe so. Though there are other competitions, and also the regular bug bounty programs that most browsers offer.


I am surprised that Chrome OS + Chrome isn't an option. If it proved uncrackable, that would be good theater; and if not, still very useful to Google. Unless the possibility of tarnishing its reputation outweighs the benefits...


Google used to run a separate contest (at the same conference as Pwn2Own) called Pwnium for Chrome OS, but starting this year they are moving to a model where they pay similarly large bounties year-round rather than at contests.

http://googleonlinesecurity.blogspot.com/2015/02/pwnium-v-ne...


I was surprised by how few OS/Browser combinations were tested. For instance, I'd think that the OSX + Chrome population is pretty big and I'd (selfishly) like that combination tested.

It feels like, just the fact that this competition and other bug bounty programs exist, means that the big companies here have gotten over reputation tarnish and know that the patch is worth it.


Any browser that can be exploited on Windows can be exploited easier on OS X. It lags far behind in mitigations. So, the Chrome/OS X combo is not done because it's redundant.


Windows has a huge amount of ridiculous attack surface like GUI rendering code in THE KERNEL: http://www.theregister.co.uk/2015/02/12/hacker_kicks_one_bit...

I'm pretty sure no system ever is going to be as exploitable as Windows.


So you only have to visit a website and do nothing else? And probably assuming javascript is enabled.


I think you instruct the judges to visit your website, and then demonstrate that you owned their box within the 30 minutes to win the prize.


Yes, weeks or months of work goes into finding the bugs, and then working out how to exploit them. The continued "hacked in x minutes" crap is nonsense.


It's basically demo day for browser (and other software) vulnerabilities. Hackers show off their exploits and vendors pay them cash for disclosing them at Pwn2Own instead of selling them on the blackmarket.


They used to sell the hacker, but since they've stopped Pwn2Own is much more popular.


Surely there are 13th amendment concerns with selling the hacker.

Do you mean that they used to sell the exploits to other hackers at Pwn2Own?


Yes. It's basically a bug-bounty system, but the bounty includes more prestige :)


And more money.


And leaving the exploits in the wild for weeks/months beforehand.


To mitigate this problem, Google has recently announced that they are running a similar program year-round:

http://blog.chromium.org/2015/02/pwnium-v-never-ending-pwniu...


With much lower rewards, though; the exploit that netted $110k here would probably get less than $50k.


But you also minimize the risk that someone else will find and report the vulnerability before you do, negating any & all chance of earning a bounty yourself.


This is a valid criticism.


Perhaps more worrying, they may have sold them elsewhere first and informed the purchaser that they have until the competition date to use them. Better business if you have to come back after the disclosure date for more holes...

Nothing ethical at all going on here.


My understanding is that would violate the rules of the competition. Though of course if a competitor were to do that, odds are no one would find out.


Would someone willing to sell exploits on the black market really be concerned with "violating the rules of the competition"?


Well, presumably they'd want to keep the money from the competition. That's definitely an incentive.


I would imagine it'd be fairly hard for the competition to know and revoke the money. Bad guys are almost certainly exploiting various undisclosed vulnerabilities. The bigger issue, I'd assume, is that by entering this competition you've now killed that bug you previously sold. So long as that was disclosed to the buyer, though, I imagine you're all set.


Odds are the black market guys find out. I don't know how the market works, but I'd be pretty pissed if I had bought a hole just for it to be patched soon after.


I was wondering the same thing. It seems like there would be no way to stop someone from having exploits beforehand. And a single person finding exploits in every major browser in one day seems unlikely.


They have to exploit the fully patched versions (on the day). Prior to the contest, vendors are free to patch the exploits, if found.


Any insights into how researchers find these bugs? I'm particularly interested in the kernel timing bug - how did the researcher know that he would find the kind of exploit he needed? It seems like he's a fast learner, going from no native client code to a big exploit...


People who exploit browser bugs do it as a profession, or as a passionate consuming hobby. They're generally connecting the dots between a series of lesser flaws. They spend weeks and months banking those lesser flaws, and the patterns that create them. Some of them can come from static analyzers, some can come from theorem provers, some are just lucky finds.

For awhile, a lot of them came from fuzzers; if you're interested in how that works, strong recommend for Zalewski's blogs on his open-source "american fuzzy lop" fuzzer.

Browsers are huge attack surfaces, as complex as operating systems, with different modules of wildly different quality. There are lot of bugs to find.

The process makes more sense when you grok how much effort goes into getting privileged RCE from a hardened browser. Nobody can sit down with that code and pull one of them out of thin air on the spot.
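For readers curious what the simplest form of fuzzing looks like, here is a minimal, purely illustrative mutation fuzzer in Python. The `toy_parser` target and its planted NUL-byte bug are invented for the demo; real fuzzers like AFL add coverage feedback, corpus management, and far smarter mutation strategies.

```python
import random

def mutate(data: bytes, n_flips: int = 4) -> bytes:
    """Flip a few random bits -- a tiny slice of AFL's 'havoc' stage."""
    buf = bytearray(data)
    for _ in range(n_flips):
        i = random.randrange(len(buf))
        buf[i] ^= 1 << random.randrange(8)
    return bytes(buf)

def fuzz(seed: bytes, target, iterations: int = 1000):
    """Feed mutated inputs to `target`, collecting any that crash it."""
    crashes = []
    for _ in range(iterations):
        sample = mutate(seed)
        try:
            target(sample)
        except Exception:
            crashes.append(sample)
    return crashes

# Hypothetical target with a planted bug: it chokes on a NUL byte.
def toy_parser(data: bytes):
    if b"\x00" in data:
        raise ValueError("unexpected NUL")

found = fuzz(bytes(range(1, 17)) * 4, toy_parser)
print(f"{len(found)} crashing inputs found")
```

The interesting part in practice is not the loop, which is trivial, but deciding which crashes are exploitable and minimizing the triggering input.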


Why is it called american fuzzy lop?


AFL is a breed of pet rabbit that is fuzzy (fluffy).

The software is a fuzzer.

https://en.wikipedia.org/wiki/American_Fuzzy_Lop

https://en.wikipedia.org/wiki/Fuzz_testing


I found CVE-2010-0539 (Safari remote code execution) by setting up a website that just fuzzed a ton of stuff. It had a meta-refresh setup so the browser would just keep reloading with a new random fuzz payload. I was in my 2nd year of a CS undergrad program, so I would do stuff like this when I was bored.

I think much of it is just a bit of luck that you're looking in the right place.
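A bare-bones sketch of such a harness, assuming nothing about the original beyond what's described above: each generated page carries a meta refresh pointing at the next seed (the `/fuzz?seed=` URL scheme is invented here), so the browser keeps pulling fresh semi-random markup until something crashes.

```python
import random

def fuzz_page(seed: int) -> str:
    """Build one semi-random HTML page that immediately reloads itself
    with the next payload, mimicking a meta-refresh fuzzing harness."""
    rng = random.Random(seed)  # deterministic per seed, so crashes replay
    tag = rng.choice(["div", "span", "table", "iframe", "marquee"])
    # Attribute soup: malformed values are what shake out parser bugs.
    junk = "".join(rng.choice('<>"=;()') for _ in range(rng.randrange(5, 40)))
    return (
        "<html><head>"
        f'<meta http-equiv="refresh" content="0;url=/fuzz?seed={seed + 1}">'
        "</head><body>"
        f"<{tag} style='{junk}'>payload</{tag}>"
        "</body></html>"
    )

print(fuzz_page(0)[:60])
```

Seeding a `random.Random` per page is the key design choice: when the browser finally dies, the seed in the last requested URL lets you regenerate the exact payload that killed it.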


What's funny, is back in the mid-late 90's, I found a lot of bugs in Netscape Navigator simply by running the browser in a more secure OS (Windows NT) vs. Windows 9x... There were a lot of conditions where the browser would crash in NT, but in 9x meant you had an exploit.

Many of those were pretty simplistic... fortunately for almost everyone at the time, the browser wasn't a widely used method of exploit, and to my knowledge compromising distributed computer networks via such compromises and ad networks wasn't thought of either. Though by 2002/2003, I had started blocking Flash, Java and Adobe Reader at home when I saw what could be done in the browser.


Can I ask what exactly you were watching for with your fuzzer? Were you just waiting for Safari to crash? Then what?


I've seen a couple of hackers back in the old days. Of course they've probably moved on to more sophisticated techniques, but I doubt they are wearing suits and doing this as professionals.

My guesses at more concrete answers to your questions:

1. By brute forcing with popular techniques for buffer overflows;

2. By tracking what the browser is doing via debuggers and trying to match a kernel bug with it;

3. By reading a lot about the latest exploits on the internet and connecting the dots.

What I think none of them is doing:

1. Analysing open-source code (basically trying to find the exploit by reading the code);

2. Studying CS for 5 years before doing this


@GP

4. By using various forms of static/dynamic analysis (taint analysis, fault propagation, fault injection).

5. By fuzz-testing the software.

Taint analysis is looking at which registers/memory locations you have access to. You inject data into a process and then check where in memory that data shows up, which execution branches are taken, which registers are used with the data, what the stack does, etc.

Fault propagation looks at where errors and faults are handled and how they are handled. The idea is to get a good idea of how a process manages errors and then find an error that is not handled (correctly).

Fault injection is injecting errors into a process and then watch the fireworks. Think of this as the brute-force approach to fault propagation analysis.

Fuzz testing is bombarding the process with semi-valid/semi-random input and watching what happens. There are many tools with this. Example output for browser-fuzzing could look like this:

<html>>style="\>>overflow:none"></body><html>

Obviously none of today's browsers would be exploited by that, but with multi-threaded rendering, memory management, javascript reading/writing HTML and all that stuff going on at the same time, it's not surprising that fuzz testing can turn up lots of errors (though not all exploitable).
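As an illustration of the fault-injection idea described above, here is a toy Python sketch (all names invented): a dependency is wrapped so it randomly raises, exposing callers that never handle the error.

```python
import random

def inject_faults(func, rng, p=0.3):
    """Wrap a dependency so it sometimes raises -- the brute-force
    'fault injection' approach to probing error handling."""
    def wrapper(*args, **kwargs):
        if rng.random() < p:
            raise IOError("injected fault")
        return func(*args, **kwargs)
    return wrapper

# Toy code under test: counts lines, but never handles read errors.
def count_lines(read):
    return len(read().splitlines())

rng = random.Random(42)
flaky_read = inject_faults(lambda: "a\nb\nc", rng)
failures = 0
for _ in range(100):
    try:
        count_lines(flaky_read)
    except IOError:
        failures += 1  # the unhandled fault propagated to the caller
print(f"{failures}/100 calls surfaced an injected fault")
```

In a real fault-propagation study you would inject at system-call or allocator boundaries rather than a lambda, but the principle is the same: find the error path nobody wrote.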


People certainly do find security vulnerabilities by reading source code.


The nasty Drupal bug from last October was discovered when a (name withheld) company in Europe paid for a professional security audit of Drupal's open-source code. Thankfully when they found the bug, they contributed the patch back to the community.


IIRC the Google researcher discovered the Heartbleed bug when he was auditing the OpenSSL code


The Codenomicon team found it using a fuzzer. My recollection is that the Google researcher also used a fuzzer, but I can't find any evidence of that in a brief search.


> Using an out-of-bounds read/write vulnerability he claims he found through static analysis, ilxu1a’s attack led to medium-integrity code execution in what ZDI called “sub-seconds,” earning $15,000.

Looks like some code analysis tools helped for one researcher.


Saying that all browsers fall in less than 2 days at Pwn2Own is like saying a mathematician proves a long standing theorem in 2 days of a conference.


Yeah, like at today's provathon Grigori Perelman proved Poincaré conjecture in under 30 minutes.


We get that all browsers were pwned, but what OS were they running? For example, we can see in the post that there were three bugs in Firefox, but we don't know on what system those bugs would allow escalation of privileges and arbitrary code execution. Or is there a bug allowing arbitrary code execution on Windows, Linux, *BSD, and Macs at the same time?!


All were running on Windows, except Safari which was running on OSX. You can see the details at this overly long URL from the contest organizers:

http://h30499.www3.hp.com/t5/HP-Security-Research-Blog/Pwn2O...


Here is my chance! Anyone know why hp.com urls are still, in 2015, so full of subdomains?


Because their web admins have not figured out how to do transparent load balancing.


Why?


Thanks a lot for the link.

I wonder how the score would go if linux and bsd targets were present and if they allowed chrome and firefox on mac os x as well...


Here's the first lecture among many from FSU on Offensive Security; it looks like they go pretty in-depth. I started watching the first few recently. Very relevant for those who want to explore how exploitation is done.

https://www.youtube.com/watch?v=lk3rp53b2NA


This was a very good overview; I appreciate the link. I may watch through much more of this class.


If one read all those headlines and news without proper skepticism, they would eventually start believing in all kinds of magic, like that one dude can discover race-condition exploits in the browser and the drivers in under 30 minutes.


Are regular bug bounty programs simply not paying enough whereby researchers decide to hold onto exploits for high paying events like Pwn2Own? How much would these kind of exploits go for on the black market?


Pwn2Own has a lottery dynamic that the bug bounty programs don't. As for the sub rosa pricing for these exploits, it gets complicated. Among other important details: to maximize your returns you need to be a competent salesperson (you need to have the contacts to sell them to), and the best returns make you at least morally and in some cases even legally culpable for what's done with them.

There's probably some headroom left for escalating valuations for browser RCEs, but they're not like an order of magnitude mispriced.

Also remember you're looking at the subset of bugs with the absolute peak valuation.

(I'm both ideologically opposed to bug sales and not smart enough to get RCE on Chrome, so: take this with a grain of salt).


Anyone else find that this comments page crashes their tab in Chrome for Mac? Could be the strange string submitted by frenchtouch206?

Yup: it's a known crashing string: http://venturebeat.com/2015/03/20/these-13-characters-will-c...


Bug report closed as a duplicate of a non-public (i.e. security) bug. That's not a great sign.


Why is that not a great sign? All security bugs are non-public, even in Firefox, until they are fixed. The fact that it's non-public is arguably a sign that someone looked at the bug and marked it a security issue, which also means it's likely being actively worked on.

Keeping them public would just mean that if you wanted to own people's machines, you could scan the bug database for bugs marked "security" and exploit to your heart's content until they were fixed.


This is off topic, but my problem with Chrome is that it has become an OS itself; I don't need another OS on top of my OS. I just need a browser. Chrome was an excellent one before they started pushing it toward being OS/framework-level software.


I'm on the fence with this one.

It does make for a massive attack surface with the massive amount of complexity a modern browser brings to the table, however it's also giving us a truly Operating System agnostic world. (I think? Javascript based applications are OS agnostic right?)

The browser may become a universal operating system, where web applications become the norm and it brings an end to the Linux vs Windows vs Mac wars.

Who knows?


Pwn2Own 2015 Is On! -- https://www.youtube.com/watch?v=6IKeUHpUR7g

Pwn2Own 2015: Day 1 Highlights -- https://www.youtube.com/watch?v=X2Ssw2sLUHI

Pwn2Own 2015: Day 2 Highlights -- https://www.youtube.com/watch?v=V99skqmTyiY


> He told Childs via translator that not only was it was his first time writing Native Client code but it was his first time dealing with a kernel exploit.

To me this just confirms that no matter what the proponents say, Google Native Client is just another form of ActiveX, just with a Google badge this time.

Letting a website run native code on your machine is a terrible decision, and completely disrespectful to your users.

Needless to say, I'm not going to switch to Chrome anytime soon.


Well, it means you have to break out of the NaCl sandbox instead of the V8 sandbox. Neither allows you to run arbitrary code, and I'm not sure there's any particular difference in hardness between them.

That said - Native Client doesn't seem to be used for anything on the web. It might have been useful if other browsers had adopted it, but with Mozilla pushing asm.js I wonder at what point they decide to pull the plug on this ...


My takeaway is that Flash, by itself, doubles your vulnerability and Acrobat triples it. Obviously these results alone aren't statistically significant, but Flash and Acrobat have been dire sources of vulnerabilities for over a decade (and bear in mind that Safari renders PDF natively).


Safari uses its own PDF renderer; I wonder if the Acrobat bugs really apply to that?


> Hacked Chrome browser in 2 minutes

Does that mean it took 2 minutes for the script to finish its job?


So, I wonder how lynx, links, elinks, w3m and emacs-w3m fare at this sort of thing…


I think it's reasonable to consider that we should stop adding new features to the web until it's at a point where we could have at least one www that doesn't leave users vulnerable.

Literally every browser is unsafe, and has always been unsafe. There's zero reason this should be the case. Transferring documents online is not that hard a problem. Perhaps blinging out all the latest features for all the latest ad companies is not worth the cost to users' system security.


Add operating systems, servers, and languages to your list for completeness. The reason they are unsafe is because they are created by humans, who make mistakes. You are basically proposing that all technology innovation should cease until these things are 100% secure.


We just have to accept the trade-offs of functionality. Things like airplane autopilot systems and medical equipment are probably pretty close to perfect, but that level of correctness in your browser, much less your operating system, much less your hardware - would be very costly.

Even if every computer was running an operating system that was developed as carefully as equipment in charge of people's lives, I'm sure there'd still be vulnerabilities. The low hanging fruit is relative, and there'll always be people cracking computers.


I think it's more like saying maybe before massively increasing the browser attack surface we should consider if it's really the right thing to do. A lot of sandbox escapes in recent years have been in very rarely used features like WebGL and Native Client. Cool tech for sure, but also big new exploit zones.

The main problem is that what used to be called "mobile code" before smartphones were a thing is really convenient. That's why Java tried it too. Sandboxed code helps a lot, so there's this constant tension between trying to make the sandbox less restrictive and keeping it secure.

Sometimes I think that despite poor execution the JVM guys had the right idea. Sandbox code from the net, but also have code signing to fall back on.


Care to back that up? Please list this "lot" of WebGL and Native Client exploits. AFAIK there have been < 5 total over 5 years. Compared to the total number of exploits, it certainly doesn't seem like WebGL or Native Client are an issue.

As for rarely used, I'd guess Google Maps is a pretty well used site that uses WebGL.


Well, just search for "webgl cve", there seem to be quite a few. It exposes 3D drivers to untrusted code, and they can run to millions of lines of code.

You're right, Google Maps is a good example of WebGL use. But it could also just be a regular desktop app. People would download it just fine.


That would put us at least 20 years from any new HTML features.


Not to mention the fact that somebody would just go ahead and implement them (security be damned), defeating the whole purpose.


Remind me why we need new HTML features?

The web looks and works pretty much how it did 10 years ago. Almost every new feature is just a replacement for an old feature that did the same thing but not as well.

We've gone from plugins to add features to building the features right into the browser. We've gone from tens of megabytes to hundreds of megabytes just to render some text and images. There is literally too much code to audit. I think even kernel hackers would have trouble understanding the complexity of modern browsers.

In 20 years, do you expect the web will be significantly different than it is now? The interface might change, but you can't really innovate on text and pictures.


Can't tell if serious or joking.

Although the length of your post worries me that you might actually believe what you are saying.

Please check out the Bananabread example on mozilla's developer page (just search it on your favorite search engine). Please do not post such comments again, there are people on here that will not realize you're joking.


> Although the length of your post worries me that you might actually believe what you are saying.

> Please do not post such comments again, there are people on here that will not realize you're joking.

What is so dangerous about someone having the opinion that HTML shouldn't get more features? This "be careful what you say"-tone is so weird to me, on this topic.

> Bananabread

A 3D shooter in the browser. So what? Yes there are advantages to having this power, but not everyone is going to agree that it is worth it.


A lot of sites are still supporting old browsers like ie7/8, which means they have to lower their tech to the lowest common denominator. So it's not too surprising that it feels the same, especially if you happen to be browsing in one of them!

Having to support old tech lengthens development and testing time quite a bit. We need more browser features so we can write less code, only write it once, and distribute self-contained components.


First of all, you don't distribute self-contained components, you distribute text, markup, and media. The browser renders it. Second, how much technology do you think we need to do this? Should we be talking to NASA? Third, supporting old tech is important and inevitable.

A website is literally just text and pictures. And video. That's it. We really don't need it to be so massively complicated, to suck up so many resources, and to be such a security nightmare.

In 20 years, your browser will probably use a gigabyte or two of ram minimum, require four CPU cores, and consist of several hundred thousand lines of code - maybe a million.

To render text.


> First of all, you don't distribute self-contained components, you distribute text, markup, and media.

Yes and that is one of the exciting innovations being worked on right now, web components. That's exactly why I brought it up.

Some websites are just text and pictures, it is true. We call them brochure-ware. But not the sites I work on. I build applications for managing your entire business. They are just text and pictures in the same way that a PC is just text and pictures. Should we halt innovation on computers in general? NASA's flight control systems are really just calculators if you break it down as you have done with the web.

At the worst you sound anti-technology, and at best, snobbish about your area of specialty. Implying that whatever you work on is important, but things others work on are 'just text'.


> At the worst you sound anti-technology,

Aah, a Luddite, seize him!

Maybe it's not anti-technology as much as anti-cram-everything-into-the-browser. Some people think that full-fledged applications should be kept out of the browser, but that doesn't mean that they think that full-fledged applications shouldn't exist at all.


>A website is literally just text and pictures. And video. That's it.

And CSS, which supports transforms and animation and is inching towards Turing completeness, and JavaScript, and maybe HTML5 canvas and WebGL.

Like as not, we seem to be converging towards a point where websites are as much applications as documents.


.... which is exactly what I propose is the problem. Those are the extra features that we bought at the cost of users' security and privacy. Even if those are desirable features, it's not clear that they should be a part of the web rather than an orthogonal distribution mechanism. Even if they are desired to be a part of the web, it's not clear that it was worth it to prioritize them over the security and privacy of users' systems.


It's not clear because it's subjective. You and I would prefer if security and privacy were more prioritized, but the reality is that services that put an emphasis on such things have been available for decades, and they have been a flop outside of very specific audiences.

I'm not saying this excuses the industry, but it doesn't change the incentives. If you want more secure services, you need to change the preferences of the users.


I don't think that is true. We had security and privacy problems before we had extra features. These problems will always exist, whether we innovate on other stuff or not.


> We need more browser features so we can write less code, only write it once, and

throw it all away and rewrite it all again in two years when even newer features get added to the browsers!


Sounds good to me.


>Transferring documents online is not that hard a problem.

Here's an idea: make your own browser that competes with Chrome for usability, offer a huge prize for anyone that hacks it. It's easy, right?


>Transferring documents online is not that hard a problem.

Transferring is the easy part. Rendering them is the hard part.


That would mean: "every program that processes user supplied data should be invulnerable to hostile input".

Suddenly it doesn't sound so reasonable.


For a first cut point, CP67 permitted running any code at all, including CP67, safely. At least once CP67 ran on CP67, ..., on CP67, 7 levels deep, on the IBM 360/67.

CP67 was an abbreviation for control program 67 for the IBM 360/67 computer of, right, about 1967 and was developed by the IBM Cambridge Scientific Center as a means of interactive, time-sharing development of operating systems.

Later commercial time-sharing services used CP67. So, could have a few dozen users, each writing whatever code they wanted, e.g., assembler since it was a good environment to make developing assembler code easy, with, as far as I know, no user ever hurting the CP67 code or the work of any other user.

So, right, CP67 was the first or one of the first cases of virtual machine. Its security provisions appeared to be absolute -- run any code at all, including privileged code, e.g., including operating systems, including CP67 itself, with full safety and security.

Two years or so later there was Multics from MIT Project MAC. It featured capabilities and access control lists (still with us). As far as I know, they worked fine. For some years there was a Multics in the basement of the Pentagon, and it was regarded as secure computing. Later Prime Computer did something quite similar and claimed that they had a prize for anyone who could break their security.

Later IBM revised CP67 and called it VM, and since then it has been from common to standard for the IBM mainframe operating systems to run on VM instead of on the bare metal. There have been decades of high end production systems running on VM. I haven't heard about any security holes.

As of a few years ago, a list from Microsoft of security holes fixed included at least one based on a "buffer overflow" bug. Gads: If as recently as a few years ago Microsoft still had buffer overflow bugs, one has to question if by then they much cared about computer security at all. Buffer overflow bugs -- that's Programming 101 for middle school.

I just checked Firefox 35.0.1 and couldn't find where to turn off JavaScript. For the Web pages at my Web site, I have so far not written a single line or character of JavaScript and hope never to, although Microsoft's ASP.NET does write some for me -- and I do wish I knew what ASP.NET classes or options I used to cause Microsoft to write any JavaScript for me.
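(For anyone else looking: the visible preferences checkbox was removed around Firefox 23, so, if I have this right, the only built-in toggle left is the hidden preference:

    about:config -> javascript.enabled -> set to false

Otherwise an extension such as NoScript does it per-site.)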

The old first rule of computer security was to separate code and data, and never but never let data from an untrusted source run as code. Never. Not once. For any reason.

So, in light of this first rule, we have JavaScript downloaded with the HTML text and markup of a Web page and executed. So, I want very, very, very clear, careful, rock solid, expertly reviewed, as close as possible to proofs of correctness evidence that it is impossible, absolutely, positively, totally, without any possibility of exception, for any JavaScript code that could possibly exist to cause my computer any problems at all -- and otherwise just block it. Nothing to do with it. Dump it.

Computer viruses have cost me about half my time so far this year, and I want nothing to do with more computer viruses. JavaScript? Totally glad to junk it.

As I recall, not so long ago, Firefox had Java enabled by default! Outrageous. And at times in Microsoft's Internet Explorer you had to be careful to disable ActiveX, which could run any code at all.

JavaScript? I want nothing to do with pull downs, pop-ups, roll-overs, the screen jumping around for no good reason, etc. Just HTML and CSS -- fine with me.

HTML was just a word-whacking mark-up language -- fine. But, yes, for user input it has text boxes, multi-line text boxes, check boxes, and radio buttons. Okay. Should be able to implement those safely enough.

Can we start to take computer security seriously? When are we going to start?

E.g., Microsoft has gone from Windows NT to 2000, XP, Vista, 8, 8.1 etc., but where in there is the solid security? They have the Start button, remove it, put it back, the Metro interface, tablets, phones, etc., but what about computer security?

I don't give even one weak little hollow hoot about touch screens, but I care a lot about computer security.


   CP67 ... security provisions appeared to be
   absolute -- run any code at all ... with full
   safety and security.

   There have been decades of high end production
   systems running on VM. I haven't heard about
   any security holes.
Sadly, that's not true at all. I ran both CP67 and VM oh so many decades ago, and it was quite useful as a means for non-malicious people to share expensive computers. But they had vulnerabilities.

Once upon a time, perhaps 40 years ago (it's been very long, my memory is hardly exact) some IBM employees/researchers started with either CP67 or VM source code (I forget which) and found literally dozens of bugs and/or usable exploits for it.

Source code was not an unreasonable starting point, since IBM published all this code on microfiche, and also probably on mag tape.

IIRC many/most of the exploits revolved around emulating the I/O channel architecture, channel programs, and corresponding SIO or SIOF instructions.

They wrote the whole thing up in a prestigious publication, perhaps the IBM Systems Journal or Communications of the ACM or maybe ACM Computing Surveys. I did a very quick search but couldn't find that particular discussion.

Edit: I may have found the original paper. First I found:

A Taxonomy of Computer Program Security Flaws, with Examples

https://cwe.mitre.org/documents/sources/ATaxonomyofComputerP...

in which they reference a 1976 IBM paper that discussed many VM vulnerabilities. Here's just one of multiple times they cite that paper:

Case: I2

Source: C.R. Attanasio, P.W. Markstein, and R.J. Phillips, ‘‘Penetrating an operating system: a study of VM/370 integrity,’’ IBM Systems Journal, 1976, pp. 102-116. System: IBM VM/370

Description: By carefully exploiting an oversight in condition-code checking (a retrofit in the basic VM/370 design) and the fact that CPU and I/O channel programs could execute simultaneously, a penetrator could gain control of the system. Further details of this flaw are not provided in the cited source, but it appears that a logic error (‘‘oversight in condition- code checking’’) was at least partly to blame.


Nice update, correction, etc. Thanks.


Actually I agree with you. I think "we" abandoned Harvard-architecture hardware like the Burroughs machines too soon and should consider more security in the hardware design that also doesn't rely on crypto. The experience I have is with the AVR microprocessors, which have the code blown into NV flash.

Of course, the first thing someone will write for it is LISP, thus invalidating your "old rules of code" by using "old code design".

These days, one can run an OS from an FPGA without impossible effort. If I had significant financials at stake, that's what I would be doing.

I have had no losses, time or money, from viruses, worms, trojans or otherwise in my 25 years of computer use. I must be just lucky.


Buffer overflow bugs may be Coding 101 now, but they certainly weren't back in 1967 when CP67 was written. I don't think people were really aware that they were an exploitable security issue back then, nor had the term "buffer overflow" been coined. Part of the reason computer systems seemed more secure back then was because there were much fewer people who could attack them and we had a much worse understanding of how to do so.


> Part of the reason computer systems seemed more secure back then was because there were much fewer people who could attack them and we had a much worse understanding of how to do so.

And because the attack surface was much smaller: No TCP/IP, DVDs, thumb drives, etc.


Is nobody paying for Linux bugs or weren't any found?


Very, very few people run Linux as a desktop (unfortunately).

The Chrome and Firefox bugs are probably exploitable on Linux too.


If you're interested in doing some research, just google "use after free". Google has a message for you.


This is one of the core reasons computers need to be reinvented.


It's simple. Just stop using memory-unsafe languages. Just say no.


Does anyone know if mobile OSes are part of the competition?

I've always thought surfing on mobile Safari was the most secure way of surfing (because of the sandbox, but also partly because of the App Store pre-release evaluation policy that limits the risk of installing malware).


There was a method to jailbreak the iPhone by exploiting some vulnerabilities a few years ago; all you had to do was go to a website in Safari, if I recall correctly. iOS is mainly OS X without unsandboxed apps -- no silver bullet.


And the jailbreak fixed it for users as well, which was cool :).


dumb question: "more ht2000 lines of code" - is that a typo of "more than...", is "ht2000" something I don't know and can't google, or are these hacks in some way related to an antique motherboard?


I'd say it's a safe bet they meant "more than 2000 lines of code"


Curious to know how WhiteHat Aviator (https://www.whitehatsec.com/aviator/) would perform there. Any ideas?


Vulnerable to the same bugs as Chrome.


I can't seem to find any info on versions - Safari 8.0.4 was released a couple of days ago, does the exploit affect this security release?


It's my understanding that any vulnerability that has been disclosed is ineligible. If 8.0.4 patched the exploit that won here, it would have been merely by coincidence.


What about Opera?


Nobody uses Opera, why add it to this competition?

But since you asked, I would bet a lot of money Opera is vulnerable to the same bugs as Chrome, since it's using the same rendering engine.


Good to see Microsoft products on top of at least one comparison against open-source alternatives.


Is this the correct headline? I would expect something more like "Apple Safari falls at Pwn2Own Day Two (Other Browsers Too)"


The real title here is "teams coordinate simultaneous disclosure of vulns targeting all major browsers".


Not sure what you're implying. Is ThreatPost supposed to be anti-Apple?


It's a common trend in tech reporting to mention/place stories in the context of Apple, because they are the largest company in the world (by revenues), and so it drives page views -- even when Apple is only tangentially related to the story.

Perfect example of this right now are all the "Is Apple Pay causing fraud?" news stories. The fraud has nothing to do with Apple Pay. The fraud is happening because banks are doing a poor job verifying who someone is when they are trying to register/activate a credit card. But they all focus on Apple, despite Apple Pay being a minor aspect of the story, because it drives page views.


FWIW Apple aren't even in the top 10 for revenues. Most are oil or car companies. Apple is #14.

http://www.statista.com/statistics/263265/top-companies-in-t...

Top market cap, yes.



