3. I stopped caring and learned to love the algorithm in 95% of normal typing. The result is that my typing speed is up but my accuracy has plummeted, yet my typing output is generally correct because of autocorrect.
Unfortunately this falls apart when I try to type anything that isn’t common English words: names, code, rare words, etc.
I also think that the keyboard could learn the different “rhythms” of typing - my normal typing which is fast and practically blind, and the careful hunt and peck which is much slower and intended for those out-of-distribution inputs. I bet the profile of the touch contacts (e.g. contact area and shape of the touches) for those two modes looks different too.
My strategy for a time was to disable autocorrect and perfect my accuracy, but this was stymied because, indeed, it's harder to type these days than when the screens were smaller and less precise; the keyboard seems to pick adjacent keys on a whim.
So I realized I had exchanged correcting the same word four times in a row for correcting the same letter four times in a row.
Why is it hard? In principle you render an image instead of discrete buttons, and do your hit testing manually. Sure, it’s more annoying than just having your OS tell you what key got hit, but keyboard makers are doing way fancier stuff just fine (e.g. Swype).
Apple's keyboard receives more information, to put it simply. It isn't told that a touch landed at a particular point; it gets the entire fuzzy contact area, which lets it use circular occlusion and other cues to choose between side-by-side buttons and override the predictive behaviour when that would be the wrong choice.
A third-party keyboard gets a single point - usually several in short succession, but it still takes more math to work out where the edges of the finger are pressing, to help determine which direction you're moving. So most just... don't.
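To make the difference concrete, here's a toy sketch (not Apple's actual algorithm; the layout, sizes, and scoring are all made up): with only a point you pick whichever key rectangle contains it, but with an approximate contact circle you can score every key the contact touches and hand that ambiguity to the predictive layer.

    #include <math.h>
    #include <stdio.h>

    typedef struct { float x, y, w, h; char label; } Key;

    /* Rough score: area of overlap between a key rectangle and the
       contact's bounding box. */
    static float overlap(Key k, float cx, float cy, float r) {
        float ox = fminf(k.x + k.w, cx + r) - fmaxf(k.x, cx - r);
        float oy = fminf(k.y + k.h, cy + r) - fmaxf(k.y, cy - r);
        return (ox > 0 && oy > 0) ? ox * oy : 0;
    }

    int main(void) {
        Key row[] = { {0, 0, 26, 40, 'a'}, {26, 0, 26, 40, 's'}, {52, 0, 26, 40, 'd'} };
        float cx = 27.0f, cy = 20.0f, r = 9.0f;  /* fat contact straddling 'a' and 's' */

        /* A point-only hit test would hand 's' to the app and stop there.
           An area-aware test ranks every overlapped key, so the predictive
           layer can still choose 'a' if the language model prefers it. */
        char best = '?';
        float best_area = 0;
        for (int i = 0; i < 3; i++) {
            float a = overlap(row[i], cx, cy, r);
            printf("key %c overlap %.1f\n", row[i].label, a);
            if (a > best_area) { best_area = a; best = row[i].label; }
        }
        printf("best guess: %c\n", best);
        return 0;
    }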
I mean, there is a reason why these sorts of constructs are UB, even if they work on popular architectures. The problems aren’t unique to IA64, either; the better solution is to be aware that UB means UB and to avoid it studiously. (Unfortunately, that’s also hard to do in C).
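For a concrete example of the kind of construct I mean - one that "works" on x86 but is UB, and exactly the kind of thing NaT bits can legitimately break:

    #include <stdio.h>

    int main(void) {
        int x;                  /* deliberately uninitialized */

        /* Reading an uninitialized local whose address is never taken is UB
           (C11 6.3.2.1p2). On x86 you just get garbage; on IA64 the register
           backing x could, in principle, be carrying a NaT bit, and consuming
           it can fault. */
        if (x == 0 || x != 0)
            printf("seemed to work here\n");
        return 0;
    }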
to discover at least two magical registers to hold up to 127 spilled registers' worth of NaT bits. So they tried.
The NaT bits are truly bizarre, and I'm really not convinced they worked well. I'm not sure what happens to bits that don't fit in those magic registers. And it's definitely a mistake to have registers whose value cannot be reliably represented in the common in-memory form of the register; the x87 FPU's 80-bit registers, which are usually stored in 64-bit words in memory, are another example.
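Here's a sketch of the x87 failure mode; whether you actually see the mismatch depends on the compiler and flags (e.g. gcc -m32 -mfpmath=387 -O0 on x86; SSE math or -ffloat-store will hide it):

    #include <stdio.h>

    int main(void) {
        double a = 1.0, b = 3.0;
        double q = a / b;    /* may be computed in an 80-bit register, then
                                rounded to 64 bits when stored to memory */

        if (q == a / b)      /* the right-hand side may still be 80-bit */
            printf("register and memory agree\n");
        else
            printf("spilling to memory changed the value\n");
        return 0;
    }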
I have no real complaints about CHERI here. What's a pointer, anyway? Lots of old systems thought it was 8 or 16 bits that give a linear address. 8086 thought it was 16 + 16 bits split between two registers, with some interesting arithmetic [0]. You can't add, say, 20000 to a pointer and get a pointer to a byte 20000 farther into memory. 80286 changed it so those high bits index into a table, and the actual segment registers are much wider than 16 bits and can't be read or written directly [1]. Unprivileged code certainly cannot load arbitrary values into a segment register. 80386 added bits. Even x86_64 still technically has those extra segment registers, but they mostly don't work any more.
So who am I to complain if CHERI pointers are even wider and have strange rules? At least you can write a pointer to memory and read it back again.
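(For the "interesting arithmetic" above, roughly, and with footnote [0]'s caveat that I may be misremembering: the linear address is segment * 16 + offset, and the 16-bit offset wraps, which is why adding 20000 to a "pointer" can send you backwards.)

    #include <stdint.h>
    #include <stdio.h>

    /* Real-mode 8086: a "pointer" is a 16-bit segment plus a 16-bit offset. */
    static uint32_t linear(uint16_t seg, uint16_t off) {
        return ((uint32_t)seg << 4) + off;
    }

    int main(void) {
        uint16_t seg = 0x1234, off = 0xF000;
        printf("%05X\n", linear(seg, off));                      /* 21340 */

        /* Adding 20000 (0x4E20) overflows the 16-bit offset, so the "pointer"
           ends up lower in memory, not 20000 bytes farther on. */
        printf("%05X\n", linear(seg, (uint16_t)(off + 20000)));  /* 16160 */
        return 0;
    }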
[0] I could be wrong. I’ve hacked on Linux’s v8086 support, but that’s virtual and I never really cared what its effect was in user mode so long as it worked.
[1] You can read and write them via SMM entry or using virtualization extensions.
The bigger problem is that a user cannot avoid an application whose author wrote code with UB unless they have both the source code and the expertise to understand it.
Siri does have documentation: https://support.apple.com/en-ca/guide/iphone/ipha48873ed6/io.... This list (recursively) contains more things than probably 95% of users ever do with Siri. The problem really boils down to the fact that a CLI is imposing enough that someone will need a manual (or a teacher), whereas a natural language interface looks like it should support "basically any query" but in practice does not (and cannot) due to fundamental limitations. Those limitations are not obvious, especially to lay users, making it impossible in practice to know what can and cannot be done.
Well, that's largely theoretical, and Siri needs far more input than it's worth. It lacks context, and because of Apple's focus on privacy/security it's largely unable to learn who you are and do things based on what it knows about you.
If you ask Siri to play some music, it will go the dumb route of finding the tracks that seem to be the closest linguistic match to what you said (if it understood you correctly in the first place), when in fact you may have meant another track of the same name. Which means you always need to overspecify with lots of details (like the artist and album), and that defeats the purpose of having an "assistant".
Another example would be asking it to call your father, which it will fail to do unless you have correctly filled out the contact card with a relation field linked to you. So you need to fill in all the details about everyone (and remember what name/details you used); otherwise you are stuck relying on rigid naming, like a phone book. Moderately useful, and since it requires upfront work the payoff isn't very good. If Siri could figure out who's who just from the communications happening on your device, it could be better, but Apple has dug itself into a hole with their privacy marketing.
The whole point of a (human) assistant is that they know you, your behaviors, how you think, what you like, so they can help you with less effort on your part: you don't have to overspecify every detail that would be obvious to you and to anyone who knows you well enough.
Siri is hopeless because it doesn't really know you; it only uses some very simple heuristics to try to be useful. One example is how it always offers to give me the route home when I turn on the car, even when I'm only running errands and the next stop is just another shop. It's not only unhelpful but annoying, because giving me the route home when I'm only a few kilometers away isn't particularly useful in the first place.
My first instinct, knowing less about this domain than maybe I should, would be to abuse the return address predictor. I believe CPUs will generally predict the target of a “ret” instruction using an internal stack of return addresses; some ARM flavours even make this explicit (https://developer.arm.com/documentation/den0042/0100/Unified...).
The way to abuse this would be to put send() on the normal return path and call abandon() by rewriting the return address. In code:
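(A sketch only - this assumes x86-64 with a frame pointer, GCC builtins, and no shadow stack; send(), abandon(), and predict() are just the names from above.)

    /* predict() decides, branchlessly, whether execution continues down the
       hot send() path (what the return-address predictor will guess) or gets
       redirected to abandon() by overwriting the saved return address. */
    void send(void);
    void abandon(void);

    void predict(int should_abandon) {
        /* The saved return address sits one word above the frame pointer on
           x86-64 with -fno-omit-frame-pointer (an assumption, not portable). */
        void **ret_slot = (void **)__builtin_frame_address(0) + 1;

        /* Intended to compile to a conditional move rather than a branch,
           so the only prediction in play is the ret itself. */
        void *target = should_abandon ? (void *)abandon : *ret_slot;
        *ret_slot = target;

        /* On return, the predictor guesses the original call site on the
           send() path; architecturally we may be headed to abandon(). */
    }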
This isn’t exactly correct because it ignores control flow integrity (which you’d have to bypass), doesn’t work like this on every architecture, and abandon() would need to be written partly in assembly to deal with the fact that the stack is in a weird state post-return, but hopefully it conveys the idea anyway.
The if in predict() is implementable as a branchless conditional move. The return address predictor should guess that predict() will return to send(), but in most cases you’ll smash the return address to point at abandon() instead.
If I read this correctly, they’re “bypassing ASLR” because the binary isn’t PIE, so it’s loaded at a static address.
I would not consider this actually bypassing ASLR, because ASLR is already turned off for a critically important block of code. Practically any large-enough binary has gadgets that can be useful for ROP exploitation, even if chaining them together is somewhat painful. For ASLR to be a reasonably effective mitigation, every memory region needs to be randomized.
Yeah :/ that’s how I read it too. It would make more sense if they motivated the need to find libc, because, like you said, you could likely just use the non-ASLR gadgets exclusively. I think the author tried to use non-ASLR gadgets, ran into issues, and so went with the approach of reading the libc address from the GOT and called that “bypassing ASLR”.
It’s a matter of opinion, I guess. In the early days of ASLR it was common to look for modules that were not position-independent for your ROP chain, and that process was probably called bypassing ASLR. These days we’d probably just call that not being protected by ASLR.
This is a bit interesting in how it doesn't require further interactivity with the attacker once the libc address has been obtained, unlike most basic ROP examples, which I've rarely seen require anything fancier than return-to-main. The more the chain does in a single pass, the more it might need gadgets smarter than "set register to immediate and return".
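To make that concrete, here's a hypothetical sketch of a "does more in one pass" chain layout once the libc base is known - every offset below is a placeholder, not taken from any real binary:

    #include <stdint.h>

    /* Write an argument string into known writable memory, then make the
       call. That needs a store gadget like "mov [rdi], rsi; ret", not just
       "pop reg; ret". */
    int build_chain(uint64_t libc_base, uint64_t writable, uint64_t *chain) {
        int i = 0;
        chain[i++] = libc_base + 0x11111;    /* pop rdi; ret        (placeholder) */
        chain[i++] = writable;               /* destination for the write         */
        chain[i++] = libc_base + 0x22222;    /* pop rsi; ret        (placeholder) */
        chain[i++] = 0x0068732f6e69622fULL;  /* "/bin/sh\0", little-endian qword  */
        chain[i++] = libc_base + 0x33333;    /* mov [rdi], rsi; ret (placeholder) */
        chain[i++] = libc_base + 0x44444;    /* pop rdi; ret        (placeholder) */
        chain[i++] = writable;               /* argument for the final call       */
        chain[i++] = libc_base + 0x55555;    /* system()            (placeholder) */
        return i;                            /* qwords written                    */
    }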
I am so glad that I insisted on buying a car with CarPlay five years ago. At the time a number of our options did not have CarPlay, but were otherwise quite solid cars. If I'd gone with any of them I'd likely be a lot less happy than I am now: given that I use CarPlay on literally every drive, it's probably the single most important feature to have.
I get that GM doesn't want to cede the important center console to third parties because it feels like giving up their control, but man, is it ever going to be the wrong choice for them.
I agree with you that it's the wrong choice, but it's not just about ceding control. It's also about ceding the revenue.
For example, connecting their system to the internet costs $20/mo. I'd guess GM gets a large portion of that revenue. If you're using CarPlay, there's no reason for you to buy their service.
It looks like GM makes around $1,000 in profit per vehicle. If half of their customers give them $20/mo for a decade, that averages out to $1,200 in additional revenue per vehicle. If AT&T takes half of that, it's still $600, which is a solid boost to their profits.
Now, you might say that fewer people would buy their cars, and I'd agree - but companies make short-sighted plays all the time that backfire. Someone does the kind of back-of-the-envelope math that I did above and says "omg, I can increase our profits by 60% with this one easy trick", and it's wrong because the world doesn't work like that, but you put together some consultants and favorable consumer surveys and you get the green light.
I know: GM is just killing their relationship with consumers. I agree with you. But think about what Unity did to their developers. Unity saw the chance to charge a fee every time a game was installed and all the money that would bring - and didn't think about the predictable developer backlash. Companies do these types of things.
I know I'm slippery sloping but I wonder if they won't get rid of bluetooth and aux ports in the future. Letting people play spotify on their phone's data connection is money on the table when they could be selling their own data plans, getting a cut from their own app stores etc.
My manual Spark is pretty fun and beats Civic Sis and other fast cars in rallycross. I have done 100+ redline clutch dumps in that car. It still drives fine.
It's about GM and Google getting the data (https://www.motortrend.com/features/apple-carplay-android-au...). Switching from Android Auto and CarPlay to the Android Automotive OS (AAOS) means the auto manufacturer gets the data that was going through the phone.
CarPlay is a purchasing factor for me personally. I've always liked Volvo, but now that they all run AAOS the last few times I rented one I had to reboot the head unit when I got in the car to get CarPlay to work. Funny how vehicles running AAOS don't really integrate well with a competitor...
O/T, but I'm getting a cert error on this page - wonder if it's just me or if this site is just serving a weird cert. Looks like it's signed by some Fortinet appliance - maybe I'm getting MITMed? Would be kind of exciting/frightening if so.
EDIT: I loaded the page from a cloud box, and wow, I'm getting MITMed! Seems to only be for this site, wonder if it's some kind of sensitivity to the .family TLD.