From the first time the story surfaced, for spurious reasons[1] she was booked as a fugitive, which made it so that there was "no need" for the normal hearing timeframe.
[1] The reason being that she was found in Tennessee while being sought for a crime in another state, allowing them to treat her as an interstate fugitive from a crime scene.
Motorola had made a few design mistakes, like adding memory-indirect addressing in the MC68020, which was removed much later, in the ColdFire Motorola CPUs.
But Intel had made many more design mistakes in the x86 ISA.
The truth is that the success of Intel and the failure of Motorola had absolutely nothing to do with the technical advantages or disadvantages of their CPU architectures.
Intel won and Motorola failed simply because IBM had chosen the Intel 8088 for the IBM PC.
Being chosen by IBM was partly due to luck and partly due to a bad commercial strategy by Motorola, which had chosen to develop two incompatible CPU architectures in parallel: the MC68000, intended for the high end of the market, and the MC6809 for the low end.
Perhaps more due to luck than to wise planning, Intel chose not to divert its efforts into developing two distinct architectures (it was already working in parallel on the 432 architecture for its future CPUs, which was a flop), so after developing the 8086 for the high end of the market it simply crippled it a little into the 8088 for the low end.
Both the 8086 and the MC68000 were considered too expensive by IBM, but the 8088 seemed a better choice than the Z80 or MC6809, mainly because it allowed more than 64 kB of memory, which was already rather little in 1980.
In the following years, until the 80486, Motorola maintained a consistent lead in performance over Intel and always introduced various innovations a few years before Intel, but it never again matched Intel in price and manufacturing reliability, because Intel had the advantage of producing an order of magnitude more CPUs, which helped solve all such problems.
Eventually Intel matched and then exceeded the performance of the Motorola CPUs, despite the disadvantages of their architecture, due to having access to superior manufacturing, so Motorola had to restrict the use of their proprietary ISAs to the embedded markets, switching to IBM POWER for general-purpose computers.
Analysis of the issues in making the 68k and VAX more performant is a major part of what led to RISC development, with complex addressing (even in the earliest 68000) being part of the problem. People think of x86 when reading about CISC vs RISC, but x86 was not much of a consideration when the industry was switching to RISC-style designs - the industry was hitting walls on complex ISAs, especially the VAX (which was allowed to live for far too long), but also, to an extent, the 68k.
N.b. the 68000 was supposed to be a 16-bit extension of the 6800, which among other things resulted in hilarious two layers of microcoding.
As for the IBM PC, the 68000 had the major flaw of being newer, while the 8086 had been available for longer and with second sources - the 68000 was released at the same time as the reduced-capability 8088, while the equivalent reduced-capability model for the 68k arrived only in 1982.
The 68k did not resemble the VAX at all; it was considerably simpler. The 68k and the other Motorola CPUs closely resembled the earlier PDP-11, not the VAX.
Both the MC68020 in 1984 and the 80386 in 1985 added various features taken from the VAX to their base architectures, e.g. scaled indexed addressing. The MC68020 added slightly more features from the VAX, e.g. bit-field operations, while the 80386 added only single-bit operations. However, none of the few features taken from the VAX made the 68k more difficult to implement or less suitable for high-speed implementations.
The wrong feature added in the MC68020, which eventually had to be removed - the memory-indirect addressing modes - was not taken from the VAX. The VAX did not have such addressing modes; only some much earlier computers did. Those addressing modes were added by someone at Motorola without being inspired by the VAX in any way.
The VAX ISA was more difficult to decode at high speed because it used byte encodings, like x86, but it was still much easier to decode at high speed than x86. The 68k ISA, which used 16-bit encodings, was much easier to decode than x86, being intermediate in ease of decoding between a RISC ISA and the VAX. The x86 ISA is probably the most difficult-to-decode ISA that has ever been used in a successful product, but with the huge number of logic gates that can be used in a CPU nowadays, that is no longer a problem.
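To make the decode-granularity point concrete, here is a deliberately toy Python sketch (the instruction lengths are invented, not real encodings): with byte-granular encodings, every byte offset in a fetch window is a potential instruction start that a wide decoder must speculatively consider, while a 16-bit granularity halves the candidate positions, and a fixed-length RISC encoding reduces them to one per instruction slot.

```python
# Toy illustration: instruction-boundary discovery in a fetch window.
# Lengths are invented; the point is granularity, not real encodings.

def start_offsets(lengths):
    """Each instruction's start offset depends on the sum of all
    previous instruction lengths - an inherently serial dependency."""
    offs, pos = [], 0
    for n in lengths:
        offs.append(pos)
        pos += n
    return offs, pos

def candidate_starts(total_bytes, granularity):
    """Positions a wide front end must treat as possible instruction
    starts when decoding speculatively in parallel."""
    return total_bytes // granularity

# 16 bytes of hypothetical code in each encoding:
_, x86_bytes = start_offsets([1, 3, 2, 6, 4])   # any byte may start an insn
_, m68k_bytes = start_offsets([2, 4, 2, 6, 2])  # only even offsets can

print(candidate_starts(x86_bytes, 1))   # 16 speculative decode positions
print(candidate_starts(m68k_bytes, 2))  # 8 speculative decode positions
```

The serial length-dependency in `start_offsets` is what wide front ends work around by brute-force speculative decoding at every candidate offset, which is why coarser granularity directly cuts decoder cost.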
The reduced-capability variant of the MC68000, i.e. the MC68008, was launched too late to be useful for IBM, because Motorola had not realized it was a good idea and did it only after the success of the Intel 8088.
Simultaneously with the MC68000, Motorola launched the MC6809, which it believed would be sufficient for cheaper products. That was Motorola's mistake. The MC6809 had a much more beautiful ISA than any other 8-bit CPU, but by the time it was launched, 8-bit CPUs were becoming obsolete for general-purpose computers: the launch of 64-kilobit DRAM packages in 1980 made more than 64 kilobytes of memory economical in a PC, for which 8-bit CPUs like the Zilog Z80 and Motorola MC6809 were no longer suitable.
Harder to optimize at the microarchitectural level, because each individual instruction represents a far more complex execution model - including even decoding what the CPU is supposed to do.
x86 is comparatively simple, with indirect-addressing support limited to the point that it can be inlined in the execution pipeline, and with many instructions either being actually "simple" to implement or acceptable to handle in a slow path. The m68k (and the VAX even more so) is comparatively harder to build a modern superscalar chip for.
The 68k family had only one bad feature, introduced in the MC68020: a set of memory-indirect addressing modes.
Except for this feature, all instructions were as simple as or simpler to implement than the x86 instructions.
The MC68020, like the 80386, was a microprogrammed CPU with multi-cycle instructions, so the memory-indirect addressing modes did not matter yet.
Those addressing modes became a problem later, in the CPUs with pipelined execution and hardwired control, because a single instruction with such addressing modes could generate multiple exceptions in the paged MMU and because any such instruction had to be decoded into multiple micro-operations in all cases.
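As a rough illustration of why that is costly, here is a toy Python sketch (the micro-op names and counts are my own invention, not actual Motorola or Intel microcode): a memory-indirect mode embeds a load inside address generation itself, so a single architected instruction carries an extra memory access - and therefore an extra potential page fault - before the operand access even happens.

```python
# Toy sketch (not real microcode): micro-ops needed just to form the
# effective address of one operand. Mode strings are illustrative only.

def address_micro_ops(mode):
    """Return the address-generation micro-ops for a given operand style."""
    if mode == "x86: [base + index*scale + disp]":
        # Pure register arithmetic: no memory touched during AGU work.
        return ["compute_ea"]
    if mode == "68020: ([bd,An],Xn*scale,od)":
        # Memory-indirect: a pointer is *loaded* mid-address-generation,
        # which is a second place a page fault can occur inside one insn.
        return ["compute_intermediate_ea",
                "load_pointer_from_memory",
                "compute_final_ea"]
    raise ValueError(mode)

print(len(address_micro_ops("x86: [base + index*scale + disp]")))  # 1
print(len(address_micro_ops("68020: ([bd,An],Xn*scale,od)")))      # 3
```

The point of the sketch is only the shape of the problem: the extra load in the middle of address generation is what forces multi-micro-op decode and multi-exception handling for every such instruction.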
For embedded computers, backwards compatibility is not important, so Motorola could correct this mistake in the ColdFire CPUs, but for applications like the Apple PCs they could not remove the legacy addressing modes, because that would have broken the existing programs.
Besides the bad memory indirect addressing modes, 68k had the same addressing modes as 80386, except that they could be used in a much more orthogonal way, which made the implementation of a CPU simpler, not more complex.
For a corrected 68k ISA, e.g. ColdFire, it is far easier to make a superscalar implementation with out-of-order execution than for x86.
As I have said, the 68k does not resemble the VAX at all. The base 68k architecture resembles a 32-bit port of the DEC PDP-11 architecture. On top of the base architecture, the MC68020 added a few features taken from the VAX, mainly scaled indexed addressing and bit-field operations, and a few features taken from the IBM 370, e.g. compare-and-swap.
The Intel 80386 also took scaled indexed addressing from the VAX, but instead of implementing bit-field operations it added only single-bit operations. That is a negligible simplification of the implementation, which Intel chose only because their instruction format did not have any bits left for specifying the length of a bit field.
None of these features taken from VAX has caused any problems in either the Intel or the Motorola CPUs in high-speed pipelined implementations.
> Those addressing modes became a problem later, in the CPUs with pipelined execution and hardwired control, because a single instruction with such addressing modes could generate multiple exceptions in the paged MMU and because any such instruction had to be decoded into multiple micro-operations in all cases.
This is literally my point - the people involved in the shift to RISC had figured out that it was a problem, and one aspect that made x86 easier to optimize long term (outside of Intel's huge market share) was that x86 had at most one memory operand per instruction (with certain exceptions). The m68k's orthogonality meant that both decode and execution were harder long term, especially since you are going to have to support software that already uses those features - x86 has less legacy baggage there by virtue of not being as nice early on.
A clean break toward simpler internal designs, backed by compiled-code statistics, led most vendors - including Intel - toward the RISC style. Intel just happened to have a constantly growing market share for its legacy design and never fully committed to abandoning it, while lucking out in that its simplistic design made it easier to support long term.
> but for applications like the Apple PCs they could not remove the legacy addressing modes, because that would have broken the existing programs.
The "FireBee" project, which is/was an Atari ST-ish clone built on ColdFire ... proved this wasn't actually a huge obstacle: they made an Atari ST-compatible system which provided a workable emulation mode that could still run classic 68k code. And then things like the kernel etc. could be recompiled or modified in pure ColdFire mode for performance.
I have one downstairs ... tho, in fact, I never ever use it.
Certainly a lot less disruptive than the half decade or more of crashy, slow 68k emulation on PowerPC that Apple was forced to do instead. That initial era of PowerPC Macs running MacOS versions prior to 8 was... terrible.
ColdFire was/is literally that, as I understand it. But there was really no market for it. They produced variants up to 300 MHz, if I recall, but then relegated it mostly to the microcontroller market and then stopped developing it.
It was too late, and just oh-so-slightly incompatible with the 680x0. But I suspect that if the ISA used in ColdFire v4 had existed in 1994 or 1995, Apple's honestly disastrous foray through PowerPC could perhaps have been avoided.
«More CISC-y» does not by itself mean «harder to optimise for». For compilers, what matters far more is how regular the ISA is: how uniform the register file is, how consistent the condition codes are, how predictable the addressing modes are, and how many nasty special cases the backend has to tiptoe around.
The m68k family was certainly CISC, but it was also notably regular and fairly orthogonal (the legacy of the PDP-11 ISA, which was a major influence on m68k). Motorola’s own programming model gives 16 programmer-visible 32-bit registers, with data and address registers used systematically, and consistent condition-code behaviour across instructions.
Contrast that with old x86, which was full of irregularities and quirks that compilers hate: segmented addressing, fewer truly general registers (5 general purpose registers), multiple implicit operands, and addressing rules tied to specific registers and modes. Even modern GCC documentation still has to mention x86 cases where a specific register role reduces register-allocation freedom, which is exactly the sort of target quirk that makes optimisation more awkward.
So…
68k: complex, but tidy
x86: complex, and grubby
What worked for x86, though, was the sheer size of the x86 market, which resulted in better compiler support, more tuning effort, and vastly more commercial optimisation work than m68k. But that is not the same claim as «68k was harder to optimise because it was more CISC-y».
Notice that I didn't write "harder to optimize for" - I am not talking about optimizing code, but about optimizing the actual internal microarchitecture.
It turns out m68k orthogonality results in an explosion of complexity in the physical implementation and is way harder to optimize, especially since compilers did use that orthogonality. The far more limited x86 was harder to write code generation for, but that meant simpler execution in silicon and less need to pander to slow-path-only instructions. And on top of that you get the part where Intel's scale meant they could have two or three teams working on separate x86 CPUs at the same time.
Once again – respectfully – this remains largely twaddle as the facts themselves state otherwise.
Even at the microarchitecture level, the hard part is not raw CISC-ness but irregularity and compatibility baggage. In that respect x86 was usually the uglier customer.
High-end x86 implementations ultimately scaled further because Motorola had less market pressure and fewer resources than Intel to keep throwing silicon at the problem, not because m68k was somehow harder to optimise.
Later high-performance m68k cores did what later x86 cores also did: translate the architected variable-length instruction stream into a more regular internal form. Motorola’s own MC68060 manual says the variable-length M68000 instruction stream is internally decoded into a fixed-length representation and then dispatched to dual pipelined RISC execution engines. That is not evidence of an ISA that was uniquely resistant to microarchitectural optimisation. It is evidence that Motorola used the same broad trick that became standard elsewhere: hide ISA ugliness behind a cleaner internal machine.
There is also a deeper point. The m68k ISA was rich, but it was comparatively regular and systematic at the architectural level. The m68k manuals show a clean register model and – notably – consistent condition-code behaviour across instruction forms. That kind of regularity is exactly what tends to help both compiler backends and hardware decode/control design. By contrast, x86’s biggest hardware pain historically came not from being «less CISC» than m68k, but from being more irregular and more burdened by backward compatibility.
Lastly, but not least importantly, CPUs were not the core business of Motorola – it was a large communications-and-semiconductors company, with CPUs being just one product family within a much larger semiconductor business.
There was no clear understanding within the company of the rising importance of CPUs (and computing in general), hence the chronic underinvestment in the CPU product line – the m68k never saw highly advanced, performant designs purely because of that.
Well, here I am following what people who worked on CPUs at the time wrote.
And for microcoded systems like x86 and the 680x0 were (including the 68060), it matters greatly how many microinstructions your instruction stream decodes into - something that favours ISAs that are not orthogonal - and it is a major reason why x86 often has a ratio of only 1.2-1.6 microinstructions per instruction over overall program code.
Orthogonality makes this problematic: while it's easy in the old "interpreter microprogram" style and easy to program in assembly for, it means that for the 68k you have to deal with many addressing modes for every operand - whereas x86 pretty much fuses it into one or two microinstructions, because only one operand can have a computed address and the scope of available computation is limited (even compared to just the 68000).
This means that while both architectures can use the "translation microcode" approach, one of them (x86) will easily decode into one or two 72-bit microinstructions (using the P6 as the example here), with the worst case involving the somewhat rare 3-operand form of memory address (which can still only happen for one operand of the instruction, not both).
It used to be the case with Intel Macs and their atrocious confluence of cooling system, thermals, and power-supply design (the CPU itself was not really to blame).
But when RAPL and similar tools are used to throttle the CPU, the CPU time gets reported as kernel_task - on Linux it would show up similarly, as one of the kernel threads.
The process described is literally an attempt at canceling benefits by the "frog boiling" method. If the Tories went straight to canceling benefits, they would end up in trouble; so instead they made the worst possible process they could and framed it as "verifying eligibility and making sure benefit funds are not scammed out".
Similar approaches are used in other areas of the British government, unfortunately.
We seem to have the same in Poland, given that I occasionally hear about people being upset that they have to re-prove to ZUS or NFZ that they have a missing limb or some other such (still) permanent disability.
> Similar approaches are used in other areas of the British government, unfortunately.
Look, I'm as cynical as they get, but where you see an attempt at "boiling the frog", and the author sees a personal insult, I see a simple security/anti-fraud measure. They're doing it for the same reason your bank logs you out after some 20+ minutes even if you're active, why most sites have you reauthenticate within days to months, why your certificates expire after a year, or why you may rotate your access/API keys frequently. This prevents an erroneous (or fraudulent) state from living forever.
I'm usually the security hater around these parts; never thought I'd have to defend these ideas. But the truth is: if you don't have a process like this in place, you'll eventually discover double-digit percent of the money is going towards people with fake disabilities, registered once 40 years ago in bumfuck nowhere, and never verified since.
FWIW, I do agree such a process can be weaponized against people in general or against some groups specifically, and I'm also not saying the process described in TFA is perfect. I am saying that the process was not the problem in this story. This is a story whose protagonist is an asshole who abuses technology to hurt innocent people.
There's a middle ground between "once, 40 years ago, and never again" and "once every year".
There's a show The Last Leg that talks about the UK procedures along with many other disability issues and hurdles.
Missing limbs don't grow back, cerebral palsy is incurable, and so on ... yet year in, year out, people face disruptions and re-checks, for the 15th or 16th time now that they're 28 ... and on it goes.
How much waste and needless aggro comes from annual checks? How hard would it be to follow up once every five or even ten years on conditions that have already been confirmed three or four times?
> you'll eventually discover double-digit percent of the money is going towards people with fake disabilities,
Yeah ... along with double digit election fraud and other fervid fantasies.
The reason I'm seeing "boiling the frog" is that the Tories did a bunch of reforms, and were literally caught red-handed with stuff like ordering a software vendor to ensure the software would mistakenly bar people from getting services they are entitled to.
Our ZUS is broken because someone let basic anti-fraud checks get out of hand without review; the Tories got caught literally redesigning the entire benefit system with the goal of slowly dismantling it (curiously, after their redistricting of the NHS, the system suffered a massive loss in efficiency and capability).
Around the time the K8 was released, I remember reading official Intel roadmaps announced to normal people, and they essentially planned that for at least a few more years they would segment into increasingly consumer-only 32-bit on the low end and IA-64 on the higher end.
They were trying to compete with Sun and IBM in the server space (SPARC and Power) and thought that they needed a totally pro architecture (which Itanium was). The baggage of 32-bit x86 would have just slowed it all down. However having an x86-64 would have confused customers in the middle.
I think back then it was all about massive databases - that was where the big money was, and x86 wasn't really set up for the top-end load patterns of databases (or OLAP data lakes).
In the end, Intel did cannibalize themselves. It wasn’t too long after the Itanium launch that Intel was publicly presenting a roadmap that had Xeons as the appealing mass-market server product.
Yeah they actually survived quite well. Who knows how much they put into Itanium but in the end they did pull the plug and Xeons dominated the market for years.
They even had a chance with mobile chips using ATOM but ARM was too compelling and I think Apple was sick of the Intel dependency so when there was an opportunity in the mobile space to not be so deeply tied to Intel they took it.
I think the difference was that replacing Itaniums with Xeons on the roadmap didn't seriously hurt margins (probably helped!)
The problem with mobile was that it fundamentally required low-margin products, and Intel never (or way too late) realized that was a kind of business they should want to be in.
> and thought that they needed a totally pro architecture (which Itanium was).
Was it, though? They made a new CPU from scratch, promising to replace Alpha, PA-RISC and MIPS, but the first release was a flop.
The only "win" of Itanium that I see is that it eliminated some competitors in the low- and medium-end server market: MIPS and PA-RISC, with SPARC left on life support.
The deep and close relationship of Compaq with Intel meant that it also killed off Alpha, which, unlike MIPS and PA-RISC, wasn't going out by itself. (Itanium was explicitly meant to be the PA-RISC replacement - in fact it started as one - while SGI had issues with MIPS. SPARC was reeling from the radioactive-cache scandal at the time but wasn't in as bad a condition as MIPS, AFAIK.)
I never used them, but my understanding is that the performance was solid - but in a market with incumbents you don't just need to be as good as them, you need to be significantly better or significantly cheaper. My sense was that it met expectations but that it wasn't enough for people to switch over.
Merced (the first-generation Itanium) had hilariously bad performance, and its built-in "x86 support" was even slower.
The later HP-designed cores were much faster and omitted the x86 hardware support, replacing it with software emulation where needed, but ultimately IA-64 rarely ever ran with good performance, as far as I know.
Pretty sure it was Itanium that finally turned "Sufficiently Smart Compiler" into the curse phrase it is understood as today, and it definitely popularized it.
Part of the effort to ditch x86 was to destroy the competition that existed due to second-sourcing agreements. After trying and losing in court the case to prevent AMD and others from making compatible chips, Intel hoped to push IA-64 for the lucrative high-performance markets the way it dominated in PCs, and to prevent the rise of compatible designs from other vendors.
> It’s as if they actually bought into the RISC FUD from the 1990’s that x86 was unscalable, exactly when it was taking its biggest leaps.
That's exactly what was happening.
Though it helps to realise that this argument was taking place inside Intel around 1997. The Pentium II was only just hitting the market; it wasn't exactly obvious that x86 was right in the middle of making its biggest leaps.
RISC was absolutely dominating the server/workstation space; this was slightly before the rise of the cheap x86 server. Intel management was desperate to break into the server/workstation space, and they knew they needed a high-end RISC CPU. It was kind of general knowledge in the computer space at the time that RISC was the future.
Exactly! But this was not just obvious in retrospect, it was what Intel was saying to the market (& OEMs) at the time!
The only way I can rationalize it is that Intel just "missed" that servers hooked up to networks, running integer-heavy, branchy workloads, were going to become a big deal. OK, few predicted the explosive growth of the WWW, but look at the growth of workgroup computing in the early 1990s - shouldn't this have been obvious?
I'm not sure that's a fair description of server workloads. I'm also not sure it's fair to say Itanium was bad at integer-heavy, branchy workloads (at least not compared to NetBurst).
The issue is more that server workloads are very memory bound, and it turns out the large OoO windows do an exceptional job of hiding memory latency. I'm sure the teams actually building OoO processors knew this, but maybe it wasn't obvious outside them.
Besides, Itanium was also designed to hide memory latency with its very flexible memory prefetch systems.
The main difference between the two approaches is static scheduling vs dynamic scheduling.
Itanium was the ultimate expression of the static-scheduling approach. It required that mythical "smart enough compiler" to statically insert the correct prefetch instructions at the most optimal places. It had to strike a balance between wasting resources on unneeded prefetches and being unable to issue enough prefetches because they were hidden behind branches.
The OoO x86 cores had extra runtime scheduling overhead, but they could dynamically issue loads when they were needed. An OoO core can see past multiple speculative branches (dozens of speculative branches on modern cores). And a lot of people miss the fact that an OoO core can actually take the branch-mispredict penalty (multiple times) behind a slow memory instruction that's going all the way to main memory; sometimes the branch-mispredict cycles are entirely hidden.
In the 90s, static scheduling vs dynamic scheduling was very much an open question. It was not obvious just how much it would fall flat on its face (at least for high end CPUs).
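The latency-hiding argument above can be sketched with a toy cycle model (all numbers are invented; nothing here resembles a real pipeline): when a load misses to main memory, an in-order design stalls for the full latency, while a design that can keep issuing independent work overlaps that latency with useful execution.

```python
# Toy cycle model (invented latencies) contrasting strict in-order issue
# with an out-of-order core that executes independent work under a miss.

MEM_LATENCY = 100  # cycles for a load that misses to main memory

def in_order_cycles(ops):
    """Each op waits for the previous one: the load stalls everything."""
    return sum(MEM_LATENCY if op == "load" else 1 for op in ops)

def out_of_order_cycles(ops):
    """Grossly simplified: independent ALU ops execute while the load is
    outstanding, so its latency overlaps with the useful work."""
    cycles, pending = 0, 0
    for op in ops:
        if op == "load":
            pending = MEM_LATENCY
        else:
            cycles += 1
            pending = max(0, pending - 1)
    return cycles + pending

program = ["load"] + ["alu"] * 20   # one miss followed by independent work
print(in_order_cycles(program), out_of_order_cycles(program))  # 120 vs 100
```

Even in this crude model the overlap fully hides the 20 cycles of independent work behind the miss; real OoO windows hide far more, which is the "exceptional job of hiding memory latency" mentioned above, and exactly what static prefetch scheduling had to anticipate at compile time.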
Well, TBH it wasn't all FUD - hanging on to x86 eventually (much later) came back to bite them when x86 CPUs weren't competitive for tablets and smartphones, leading to Apple developing their own ARM-based RISC CPUs (which run circles around the previous x86 CPUs) and dumping Intel altogether.
It is interesting how so much of the speculation in those days was about how x86 was a dead end because it couldn’t scale up, but the real issue ended up being that it didn’t scale down.
Well, it turns out that it could scale up, it just needed more power than other architectures. As long as it was only servers and desktop PCs, you only noticed it in more elaborate cooling and maybe on your power bill, and even with laptops, x86 compatibility was more important than the higher power usage for a long time. It's just when high-performance CPUs started to be put in devices with really limited power budgets that x86 started looking really bad...
Interesting - apparently it did scoreboarding like the CDC 6600 and allowed multiple memory loads in flight, but I can't find a definitive statement on whether it did renaming (i.e. whether writes to the same register stalled). It might not be OoO per the modern definition, but it is also not a fully in-order design.
Most of the BeOS IPC is in the mainline Linux kernel [1] - the difference here seems to be implementing some of the services that are supposed to be available, related to the filesystem etc., and the userland side of it (raw IPC does very little without another layer on top).
[1] - there's a reason why a bunch of BeBook reads the same as some of the oldest parts of Android documentation
I am distributing an SVG file. It’s a program that, when run, produces an image of Mickey Mouse.
By your description of the law, this SVG file is not infringing on Disney’s copyright - since it’s a program that, when run, creates an infringing document (the rasterized pixels of Mickey Mouse), but it is not an infringing document itself.
I really don’t think my “I wrote a program in the SVG language” defense would hold up in court. But I wonder how many levels of abstraction are needed before it’s legal. Like, if I write the mickey-mouse-generator in Python, does that make it legal? If it generates a variety of randomized images of Mickey Mouse, is that legal? If it uses statistical analysis of many drawings of Mickey to generate an average Mickey Mouse, is that legal? Does it have to generate different characters if asked before it is legal? Can that be an if statement, or does it have to use statistical calculations to decide what character I want?
The SVG file is a representation of Mickey Mouse and thus possibly touches Disney's copyright (it depends on exactly which form of Mickey it represents, as I believe some versions went public-domain-equivalent recently). It's not capable of being something else without substantial rework. Therefore it is a derivative work.
Generally, to pass the test of not being a derivative work it would need to be generic enough that it creates non-copyrighted works as well, then the responsibility shifts over. Can the program exist without a given copyrighted work (not general idea, specific copyrighted works)? Then it's quite probably not derivative.
Anything Microsoft lacking v6 is a configuration issue - ever since Vista, Windows networking (in corporate environments) has treated v4-only as a somewhat "degraded" configuration (some time ago there was even a funny news post about how Microsoft was forced to keep guest WiFi with v4 enabled, having switched everything else to v6-only).
This is core to Plan 9's "everything is a filesystem", a generalisation of the Unix "everything is a file", and surprisingly a direct analog of the Sun Spring OS RPC+namespace model.