FTA: The Broadcom chipset contains an MPU, but the researcher found that it's implemented in a way that effectively makes all memory readable, writeable, and executable. "This saves us some hassle," he wrote. "We can conveniently execute our code directly from the heap."
How in this decade, with all we know and have learned about security and exploits, can this kind of thing still happen?
Especially when hardware makers make money off it. I'll need to replace my phone soon, and I'll make an effort to buy something that doesn't use Broadcom.
Targeted attacks are hard enough to fend off on systems that are fully up-to-date. What chance does an out-of-date locked-down phone have?
If you're not allowed to control and secure your phone yourself, the next best thing is to go with uncommon tech.
Security-by-obscurity seems to be back in fashion :(
I'm also avoiding Broadcom as a "vote with my wallet" kind of punishment for being reckless with people's property. If enough people react similarly, Broadcom will learn from its mistake. If not, at least I've done my part.
What's the possibility of using this exploit to patch vulnerable Android systems, or to root the phone? It'd be an interesting stopgap while we wait for carriers/manufacturers to straighten themselves out.
(Say, if, oh I don't know... Knox and My Verizon got disabled or removed, Verizon would have no proof to void my warranty. It was Starbucks' wifi, promise!)
The risk is if your payload is not perfect in all cases. Taken to the extreme, you brick a mission-critical device and cost a life. This is why malware researchers don't write cleanup code once they do C2 takeovers. If their cleanup command wrecks a medical device or some other ancient box still running XP, that's a lot of liability.
It's not quite as simple as "giving the user full control".
Doing so requires you to disable many checks and safeguards against other more prevalent kinds of attacks. Having an unlocked bootloader, unsigned OS, modified system partition, and putting all of the power of root behind one closed source binary...
I know many wouldn't see it as malicious (especially if it was done to someone else's device), but doing it without someone's knowledge or consent does seem to cross into that territory, especially if the user didn't understand or value the benefits.
I always thought the idea of "coral malware" (a term I made up, because it grows and hardens as it goes) was interesting.
Essentially a worm, but one that patches the exploit it uses to infect, and then after X shares, deletes itself except for the patch. Eventually a "reef" of hardened remains covers the platform.
Illegal, because it is still malware, but an interesting avenue that hacktivists could explore.
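The spread/patch/self-delete lifecycle described above can be sketched as a toy simulation (no exploit code, just the propagation dynamics; all names, parameters, and the idle-timeout rule are made up for illustration):

```python
import random

def simulate_coral(n_hosts=1000, patch_after=5, contacts_per_step=3,
                   idle_limit=10, seed=1):
    """Toy model of a self-limiting 'patch worm': infected hosts spread to
    vulnerable neighbors, and each copy deletes itself (leaving the host
    patched/immune) after it has spread patch_after times, or after it has
    gone idle_limit steps without finding anyone left to infect."""
    random.seed(seed)
    VULNERABLE, INFECTED, PATCHED = 0, 1, 2
    state = [VULNERABLE] * n_hosts
    shares = [0] * n_hosts   # how many hosts each infected copy has patched-by-infecting
    idle = [0] * n_hosts     # consecutive steps with no vulnerable target found
    state[0] = INFECTED      # patient zero
    while INFECTED in state:
        for host in range(n_hosts):
            if state[host] != INFECTED:
                continue
            gained = False
            for _ in range(contacts_per_step):
                target = random.randrange(n_hosts)
                if state[target] == VULNERABLE:   # patched hosts can't be re-infected
                    state[target] = INFECTED
                    shares[host] += 1
                    gained = True
            idle[host] = 0 if gained else idle[host] + 1
            if shares[host] >= patch_after or idle[host] >= idle_limit:
                state[host] = PATCHED             # self-delete, leave only the patch
    return state.count(PATCHED), state.count(VULNERABLE)

patched, still_vulnerable = simulate_coral()
print(patched, still_vulnerable)
```

The idle timeout is needed because late-infected copies may never reach their share quota once the "reef" covers nearly everything; without it, the last copies would linger forever.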
I thought the same about IoT, but of course after some searching it's already been done. In all of those "IoT compromised" stories you read, the compromising party also changes the default password so they're the only ones with access.
It's not going to happen. Google makes a ton of money off the fact that people dispose of cell phones every two years, so they're never going to patch their insane OS model. I write about it a bit here:
...but the gist is that ARM isn't a PC. There's no platform. You've got random pins connected to random shit and only a subset of devices that use Device Tree. Even the ones that do still tend to have a ton of binary blobs and weird kernel patches that can't be upstreamed.
In a way, Windows/x86 makes a lot more sense. You have a base operating system. Any time you buy a new machine, you can wipe it, install the drivers and you've got a nice bloat free OS (until Win10 Advert edition anyway).
You can't just run AOSP. There have been cases where AOSP releases on master have failed to compile or require binary blobs. Maybe this weird Fuchsia crap Google is working on will provide a standard/stabilized kernel and ABI, but I wouldn't count on it.
At least Microsoft phones required UEFI, but they have locked bootloaders. If we wanted decent, fully open third-party operating systems, I'd say the old Nokia devices would be the way to go, if the bootloaders could be cracked.
> Google makes a ton of money off the fact that people dispose of cell phones every two years...
Huh? I'm pretty sure Google subsidizes their Nexus phones (not that they actually sell many), and I thought they didn't charge volume fees for their licensing program. They pretty much run Android to make sure that Apple doesn't take control of all of mobile by having a unified API (which everyone else would have had to cobble together from a diverse and often competing set of manufacturers: no small task) and then use that power to lock Google out of the thing that actually makes them money, which is advertising revenue from tracking. Other than "it is great for our software if phones are faster", I can't imagine Google would care, financially, if everyone were using older Android hardware and never bought a new device again, as long as Google could keep updating its software. Do you have any financial mechanism for your assertion? When I last bothered to really pull apart all of the statements, essentially all of the actual profit in the world of Android went to Samsung.
Of course Google profits in many ways from more people having the most powerful smartphones possible.
Throwing away an old phone and buying a new phone doesn't have to directly fund Google for it to be undeniably profitable to Google.
That's not to say Google is deliberately bricking people's phones to force them to upgrade more often; that makes no sense at all. They just don't have enough control over the ecosystem to fix the upgrade problem.
> There have been cases where AOSP releases on master have failed to compile or require binary blobs.
I wouldn't expect master to be stable, for Android or any other project. If you were setting up a new Linux machine and compiling the kernel yourself, surely you wouldn't build master? AOSP has release branches and RCs like android-7.1.1_r38 which will have gone though a bunch of QA.
Most users wouldn't like stock AOSP though, since the built-in apps are very primitive.
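For reference, checking out one of those QA'd release branches uses the standard `repo` tool rather than a plain git clone (a sketch; the branch name is the one mentioned above, and `-j8` is an arbitrary parallelism choice):

```shell
# Sync a tagged AOSP release branch instead of master
mkdir aosp && cd aosp
repo init -u https://android.googlesource.com/platform/manifest -b android-7.1.1_r38
repo sync -j8
```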
But you should expect it. Any sane development process requires code review and build verification before code is submitted to master. There's only a problem when that system is overridden because of impossible deadlines, mandated crunch time, and other stupid business practices.
The platform team does do code review and automated tests, but occasionally something slips through, so they QA release branches as well.
Is there any major app (say, at least 1 million SLOC) which has such good automated tests that they're comfortable shipping master without any manual QA? I'd be really surprised if there is.
Even if a team is totally committed to automating all their testing, there are some big challenges with doing so at scale:
- You run into all kinds of rare bugs (with AOSP, the emulator, host OS, etc.). When I worked at Square our biggest app had ~5 hours of automated tests, and these rare bugs made it very hard to keep our builds stable. I imagine thoroughly testing AOSP would take an order of magnitude longer.
- If you want to test on a variety of real phones (which seems critical for AOSP), then testing every master commit is cost-prohibitive.
- It's hard to automatically detect graphical glitches. You can take screenshots at certain times and validate that they're pixel perfect, but you might still miss a glitch in between screenshots, e.g. during an animation.
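A minimal sketch of why sampled screenshots miss transient glitches — frames here are just flat lists of RGB tuples standing in for real screenshot buffers, and `frame_diff_ratio` is a hypothetical checker, not any real test framework's API:

```python
def frame_diff_ratio(expected, actual):
    """Fraction of pixels that differ between two equally-sized frames."""
    if len(expected) != len(actual):
        raise ValueError("frame sizes differ")
    total = len(expected)
    mismatched = sum(1 for e, a in zip(expected, actual) if e != a)
    return mismatched / total if total else 0.0

# A glitch that only exists for one frame of an animation is invisible
# to screenshot checks taken before and after it.
clean = [(0, 0, 0)] * 100
glitched = list(clean)
glitched[42] = (255, 0, 255)        # one corrupted pixel mid-animation

frames = [clean, glitched, clean]
sampled = [frames[0], frames[-1]]   # screenshot only at start and end

print(any(frame_diff_ratio(clean, f) > 0 for f in sampled))   # False: glitch missed
print(any(frame_diff_ratio(clean, f) > 0 for f in frames))    # True: only full capture sees it
```

Capturing and diffing every frame of every animation is exactly the kind of thing that makes exhaustive automated UI testing expensive.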
What's murky to me is how to tell whether I'm affected. I have a BLU R1 HD from Amazon which GSM Arena reports [0] has a Mediatek MT6735 SoC. Does that mean I'm in the clear? I don't see any mention of Broadcom components, but I guess there could be one packaged into something on the list.
Android security isn't abysmal. Third-party (non-Google) Android phones' security might be abysmal, but then we should compare it to third-party iPhones' security.
No! It's not about "compared to the alternatives", it should be, "compared to what it could be, without hideous restrictions to the end user". And in this regard, I think it probably does count as abysmal.
If I remember correctly, Android Inc. / Google set out (with Android) explicitly to create a rebrandable, reasonably customisable OS base that all the phone manufacturers could use, modify, and set up for their phones.
It was intended, and certainly branded to device manufacturers as an OS they can use and 'get all the apps'.
The hard/firmware side is part of the OS. By allowing Android to be run on any device w/o decent security, the OS inherits that.
A better solution (to my mind) would have been to restrict manufacturers from using the Android brand unless they guaranteed a certain level of support and security updates, etc.
Also, having a central (Google-level) set of base packages which get constant updates and are pulled into all phones, regardless of brand.
Remember Stagefright? Whatever happened to that? The bloggers promised us all an impending Android security armageddon (and you didn't even have to be on the same WiFi network, as this one requires). According to Google's SafetyNet, which gathers telemetry from over 1.5 billion devices, not one occurrence of Stagefright has ever been seen in the wild. Think about that for a second.
It turns out developing an exploit, chaining it with other exploits to get around the Android security mitigations, and then trying to make it work on devices running different OEM builds of Android was more difficult than people originally thought.
Moral of the story? Next time you hear horror stories about how susceptible your device is to an exploit just remember Stagefright and what a dud that turned out to be.
Not really. Unless the carrier prevented the sending of MMS messages, there wasn't anything they could do. About the only change made to most apps was to turn the MMS auto-play setting off. Stagefright never amounted to anything, due in large part to the Android security mitigations and the differences in each OEM's build of Android.
Wait, are you saying MMS messages are direct peer-to-peer? My understanding is, they go through the operators' MMS servers, where all sorts of antiviral activities can take place.
I'm unclear -- does the victim have to actively connect to a malicious network, or does it work even if the victim is unconnected or connected to another network?
> By using the frames to target timers responsible for carrying out regularly occurring events such as performing scans for adjacent networks, Beniamini managed to overwrite specific regions of device memory with arbitrary shellcode.
This implies that the exploit can happen while the phone is merely scanning for available networks.
>Two of the vulnerabilities can be triggered when connecting to networks supporting wireless roaming features; 802.11r Fast BSS Transition (FT), or Cisco’s CCKM roaming.
From reading the actual Project Zero post yesterday, the full exploit was demonstrated using TDLS (tunneled direct link setup), which is for direct device-to-device transfers within the same network, e.g. sending a video to your Chromecast. So you still have to be connected to the same network.
There's some hidden UI trickery you have to do to download the latest update. I forgot the series of gestures I had to do to get this patch when it first came out, but there's a way :)
Probably not - Lineage can only patch issues that are present in the open source code. Since this issue exists in the firmware, it's up to device manufacturers to release an update including the fixed firmware, which Lineage can then incorporate.
This is an exploit within the wifi driver? So on iOS this would be a kext, and Apple controls all of that, so they can update and re-sign the kext and issue it in an update.
Whereas Google can't do that; it'd be up to the handset manufacturer.
Is this an example of the problem Google faces using the Linux kernel in Android? It cedes the kernel to the manufacturer, and wifi (and bluetooth etc) drivers are all kernel space, not user space. So Google can't really do anything here can they?
OK, I'm not familiar with this form of firmware loading. On the Linux laptops I've seen that need proprietary firmware blobs for Broadcom wifi hardware, the firmware lives in /usr/lib/firmware/ and it's up to the kernel driver to load the most recent firmware.
I guess this must work differently on Android handsets with Broadcom wifi, if it's not possible to just push a new firmware bin file into /usr/lib/firmware? Presumably Google could do that, although I'm not sure what the code signing requirements are for firmware blobs such as this.
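For the laptop case described above, the request/load flow can be inspected with standard tools (a sketch assuming the common `brcmfmac` driver; exact paths and driver names vary by distro and hardware):

```shell
# Which firmware blobs the Broadcom driver may request, by name
modinfo brcmfmac | grep '^firmware:'
# Where the kernel's firmware loader looks for them
ls /lib/firmware/brcm/
# Which blob was actually loaded at boot
dmesg | grep -i brcmfmac
```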
In the case of some Android devices I've looked at, what you described is how they work too. The problem is that the firmware path is on a partition that's normally not writable except by signed updates. I think they'd have to be signed by e.g., Samsung, rather than Google or Broadcom, but Google probably has a key on the approved list in some devices.
Savvy users with root can probably patch it themselves, if they can find the correct fixed firmware binary and know where it's stored on their device.
What I wrote: "Whereas Google can't do that; it'd be up to the handset manufacturer."
Your first statement isn't congruent with your second statement. That they push it to their devices alone does not mean they "can and did" apply the update to all Android devices. My whole post is that they can't push kernel drivers to handsets that aren't theirs.
I mean, Windows has way more drivers than Android and yet somehow I don't have to wait two years for hardware vendors to come up with special versions of Windows updates. Is that really an insurmountable problem?
To their credit, Microsoft is massively invested in driver support, backward compatibility, and patches. They have a massive support ecosystem for and around hardware vendors.
Google has nothing close. Even if they tried, it could take a decade to get there.
Windows Phone runs on only a fraction of the SoCs that Android does. You're basically limited to a Snapdragon and that's it. There's no rich ecosystem like Android's, ranging from MediaTek to Nvidia.
If he meant PC-compatibles, that's even worse. Desktops and laptops have something in common: BIOS or UEFI, which standardize both the boot process and platform services, plus enumerable buses like PCI. Nothing of the sort is available for ARM/SPARC/whatever boards.
It's like criticizing Windows, that it doesn't run on Chromebooks or PPC Macs.
The issue is not whether it is possible or not. Of course, it is possible.
The issue is whether it will result in a better product, where better is defined as "will it sell more?". Your average cellphone buyer does not care about that, but does care about thinness and battery life, which would be hurt by a more universal platform.
Imagine a 3D chart where one axis is battery life and hardware size, another is versatility, and the third is sales, with a curve showing the total sales achieved at each compromise between the first two. We're currently sitting at one point on it, with a few people arguing to move to another point while completely ignoring the other two axes.
It's just overkill. Most users do not need to authenticate to multiple locations, be authorized for discrete network access, and then have their usage of said networks accounted for. Most users just need a shared secret to get on the internet. (And yes, RADIUS and its related backend components do take up more resources)
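As a concrete illustration, here's roughly what that difference looks like in a `hostapd` config (fragments only; the address and secrets are placeholders):

```
# WPA2-Personal: one shared secret, no backend infrastructure
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=SomeSharedSecret

# WPA2-Enterprise: per-user authentication against an external RADIUS server
wpa=2
wpa_key_mgmt=WPA-EAP
ieee8021x=1
auth_server_addr=10.0.0.5
auth_server_port=1812
auth_server_shared_secret=RadiusSecret
```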
I would call those SMB networks, not enterprise. But even so I'm kind of surprised by that since even some crappy home gear I've purchased has RADIUS support buried in advanced settings.
No headline bias there. iOS devices were vulnerable until patched, very recently. The patches are built for Android too, and it's up to the device manufacturers to get them into the wild.
Overall I like our title (currently "Android devices can be fatally hacked by malicious Wi-Fi networks") but "fatal" has a specific meaning (causing death) so isn't there some other word in place of that one that could be used here? It is certainly evocative of how dire this is but I feel like a different word might be more specific without losing gravity.