Hacker News | Rillen's comments

That's crazy; are you using it as your primary daily workhorse?

I can't imagine how you got used to the performance you have right now. Either you are not doing anything demanding with your device, or you would hugely benefit from upgrading at least every 3 years or so.


Working from home on a Friday is probably the biggest reason for a lot of people, I guess.

Meetings are another one.

Doing presentations.

Also, for me at least: on-call.

I would prefer a workstation at home and a lightweight laptop with company network access, but hey, my company prefers to buy me a 3k laptop over a 1k desktop plus a 1k light laptop.

What can I do? Yeah, nothing :)


It's such a hard comparison though.

You are running the newest of the newest.

My Lenovo ThinkPad X390, which I've been using for a while now, still has ~8h of battery left.

Chrome with 30 tabs, IntelliJ, etc. open and running.

I do assume the M1 is doing a great job, don't get me wrong, but I would like to see a more objective comparison.

Personally speaking, the most interesting thing for me is that Apple now puts a very strong SoC in all its models, from Air to Pro, at a relatively good price point (as long as you don't upgrade anything...).


Huh? Is there anything to support your assumption?


The speed of light is a huge limiting factor on modern processors.

https://arstechnica.com/science/2014/08/are-processors-pushi...

Light travels about a foot in 1 nanosecond, so moving RAM from a foot away to touching the CPU saves 2 ns in round-trip latency. That's a big win on modern systems.

In reality the effect is bigger, since electrical signals travel slower than light. And it's bigger still because off-package memory must be accessed through buses with several levels of synchronization, buffer, and gate delays.

Apple gets away with "unified memory" in the M1 because having the memory on-package means a good deal of the bus and sync and contention logic becomes unnecessary. So everything that touches RAM gets a lot faster. And almost everything touches RAM.

https://www.theregister.com/2020/11/19/apple_m1_high_bandwid...

https://www.macworld.com/article/3597569/m1-macs-memory-isnt...
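To make the parent's arithmetic concrete, here is a back-of-the-envelope sketch. The distances, the signal-speed fraction, and the typical DRAM access time are illustrative assumptions, not measured values:

```python
# Rough sketch (hypothetical numbers): how much of DRAM latency is pure
# signal flight time, versus bus/sync overhead.

C_LIGHT_FT_PER_NS = 0.98   # light travels ~1 ft per nanosecond
SIGNAL_FRACTION = 0.5      # assume PCB signals move at roughly half c

def round_trip_ns(distance_ft: float) -> float:
    """Round-trip signal flight time over a given trace length."""
    one_way = distance_ft / (C_LIGHT_FT_PER_NS * SIGNAL_FRACTION)
    return 2 * one_way

dimm = round_trip_ns(0.5)     # DIMM slot roughly 6 inches from the CPU
on_pkg = round_trip_ns(0.02)  # on-package RAM, a few millimeters away

print(f"DIMM flight time:       {dimm:.2f} ns")
print(f"On-package flight time: {on_pkg:.2f} ns")
# A typical DRAM access is on the order of 60-100 ns total, so flight
# time alone is a small slice; the larger win from on-package memory is
# eliminating the bus synchronization, buffer, and gate delays.
```

This supports the comment's point: the 2 ns of flight time is real but small, and the bulk of the improvement comes from simplifying the path to memory.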


There are plenty more advantages than just round-trip latency. The fact that it's a single SoC in turn reduces traces on the motherboard, freeing up real estate.

We are finally reaching the convergence point between smartphones and PCs. It has been in the making for a while.

Now, whether that is a good thing for consumers (losing upgradability) is another question. But PCs have become less and less flexible anyway.

We used to have slots for everything (GPU, network, audio, ...). Now we have ATX boards where you at most put in a GPU. SATA is going the way of the dodo, with M.2 becoming the new standard for anything data. I mean, what do we even change anymore on PCs... CPU, memory, GPU.

GPUs will become external devices with USB4. Mass storage has been moving more and more to NAS or USB3 devices. People who need 4+ HDDs are the exception, and you can get away with USB3 hubs plus external enclosures.

I've questioned for a long time why we still have chipsets that artificially segment motherboards when the differences between them have become very small anyway. You could easily move that last bit of I/O into the CPU SoC.

The days when we buy AMD or Intel SoCs with some default CPU+RAM+I/O are probably closer than most think.

Separate hardware is probably going to become a server/workstation/pro-only feature (with big $$$ prices).

The reality is that hardware has reached a point where most people haven't upgraded for years. It's a smaller group / minority that really needs ultra-fast hardware.

Flexibility has moved from big motherboards to external devices connected over high-speed links.

Sorry if I've gone a bit off topic, but when you mentioned the on-package memory, it got me thinking about how we really are moving toward a SoC/NUC/... future even for powerful hardware.


It is not an excuse for the 1%, but it still means that today way more people are better off.


People are better off partly because of the 1%. The system that allows some to gain in excess also allows all to flourish.


I live in a former communist country... holy fuck is that hard to explain to some people. Back then, everybody was poor, and some party members were less poor. Now, everybody has a lot more compared to then, but because someone else has even more than that (huge house, a BMW, ...), they blame capitalism. They literally stood in line to buy cabbages, and now they complain that strawberries are expensive (in December!).

The only thing worse is the pensioners, who complain that there's "too much choice" in the stores... how they used to have one chocolate (which didn't have enough cocoa to actually count as chocolate), and now there's 3/4 of an aisle of just chocolates... the chocolate they used to buy is still there, but it's shitty compared to the other stuff, so they buy a Milka and complain that Lindt is too expensive...


There are just way more people too.


And they are better off. Good point! :)


You have to work less today than in the '70s for the same things.

We have great shiny things like the internet: video, computer games, a lot more music; everyone can vote, smoking indoors is not normal, etc.

We are living in a different world. It's not meaningful to compare it to a time 50 years ago.

I prefer this time honestly.


> You have to work less today than in the '70s for the same things.

Well except for housing, medical costs, and college. So all the important things.


It is a camera for taking pictures.

It is stated on their website.

The main market for this camera is probably not 8K video.

It's also hard to determine if this is a real issue for Canon or just an issue in a circle of people who are more vocal about it.


> It's also hard to determine if this is a real issue for Canon or just an issue in a circle of people who are more vocal about it.

He sure went through a lot of trouble if it was not a real problem (maybe he works for Nikon).


He did it because it gave him, as a YouTuber, a great video camera for a very good price.

It is a real problem. The question is: for whom? For the majority of Canon R5 buyers? I doubt it.


And even for a YouTuber, 4K 60fps is more than enough. How many people are watching YouTube videos in 8K 60fps? How many people even have the internet speed for that, let alone the monitor?


Not many, but as Matt explains in his video, you can downsample the 8K footage to 4K to get more detailed 4K. The same is often done with 4K to 1080p footage for the same reason.
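The cleaner-4K claim comes down to averaging: each output pixel is an average of several source pixels, which suppresses sensor noise. A minimal sketch with NumPy, using a small synthetic frame as a stand-in for real footage (the frame size and noise level are made-up illustration values):

```python
import numpy as np

def downsample_2x(frame: np.ndarray) -> np.ndarray:
    """Average each 2x2 pixel block into one output pixel (box filter)."""
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(0)
clean = np.full((216, 384), 128.0)              # flat gray test frame
noisy = clean + rng.normal(0, 10, clean.shape)  # add sensor noise, sigma=10

small = downsample_2x(noisy)
print(small.shape)                   # half the resolution: (108, 192)
print(noisy.std(), small.std())      # noise sigma drops from ~10 to ~5
```

Averaging four samples per output pixel cuts the noise standard deviation roughly in half, which is why oversampled-and-downscaled footage looks cleaner than footage captured natively at the lower resolution. (Real video pipelines use better resampling filters than a plain box average, but the noise-averaging principle is the same.)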


It's quite clear that Canon's engineers or managers knew how to build it like this.

I hadn't even started watching the video and had the same thought about transporting the heat out through the bottom.

Whatever the reason Canon built it like this, it was on purpose.

Either they concentrated on stills for most of the development and realized too late that fixing it would have required rebuilding core parts and delayed the release, or they will bring out a pro version of this camera with unlimited 8K recording at a higher price.

The assumption that a hobby engineer can solve this issue while Canon can't is not realistic.

Based on the original firmware approach alone, I would argue that time or priority was the issue.


Yeah, no way they overlooked something like that. I had my Sony α6100 overheat within 20 minutes on mere 1080p video when the ambient temperature was at least 45°C. When these things run hot, it’s easy to feel where the hottest areas of the body are, and attaching a heatsink is obvious, and at that time, dumping heat to the tripod mount was the first thing that occurred to me when I contemplated how I might do it.


The flip side of this is that Sony (finally) made an APS-C mirrorless, the α6600, with a recording time limited only by power or storage.

So it was never impossible, and it isn't just a nerf to segment customers into the cinema bracket. It's just an area where they're willing to compromise.

Without IBIS, the αN < 6500s aren't great for many video applications, they're photo cameras which can also take some video. I suspect that Sony just didn't realize the degree to which the YouTube generation would be using their entry-level mirrorless systems for extensive video. Consider that it took them a generation to make flip screens standard, and that they bumped the battery size and ability to record indefinitely on (only) the model which has IBIS: that's a nudge, saying "If you want to record a lot of video, the α6600 is the model you want".


It was the α6000, α6300 and α6500 that were time-limited, because they generated more heat in their video encoding. The newer α6100, α6400 and α6600 don’t run so hot, using a newer and better chip for the encoding. You’re wrong about the ability to record indefinitely: this is not specific to the α6600; all three of the newer generation get it. I’m recording 100 minute videos weekly on my α6100 with no sweat now (plugged in by USB, otherwise the battery will be down to 5–10% by this time—with the camera fixed in place for these recordings I also don’t care about image stabilisation, whether in-lens or in-body), it was only having trouble in the middle of summer when the ambient temperature was at least 45°C.


Alright, good to know.

I think that's more evidence that the overheating problem wasn't a deliberate attempt to segment customers into the cinema bracket, but rather something Sony didn't realize would be such a problem, because who is going to want to film 100 minutes on their photo camera? Practically everyone, as it turns out.

Yeah, they could have and should have added better heat dissipation: but they can and should make (much) better software as well, and don't, because... it's Sony.


It doesn't have to be the engineers who overlooked it, I am sure there were a lot of great cooling solutions thought of once the overheating issues were discovered.

However, management did overlook it. Now the camera overheats and requires hours of cooling down to shoot continuously for more than 20-25 minutes. Is that acceptable for any kind of film work?


It's not built as a film camera and it's not a film camera. It's a stills camera.

The market segmentation is totally different for that.

I would still buy it. I have a Canon 80D and 90D and I rarely do any video recording at all. Why? Because a recording-ready body alone doesn't give you good video. You need audio equipment etc., and 8K video files are huge, like wtf huge.


Making pods mutable would break the core benefit of Kubernetes' declarative system.

This GitOps 'nonsense' gives me a well-defined and automatically backed-up infrastructure setup with auditing built in. It doesn't allow someone to snowflake around, which is brilliant, and it forces you and your colleagues to write everything down as manifests instead of forgetting things and degrading your system over time (Nix and NixOS are also great examples of such systems).

This reminds me of the time I learned HTML and wanted to insert line breaks all over a text instead of using proper paragraphs and letting HTML take care of the formatting.

I would like to see a better/stronger StatefulSet though: as long as a pod is alive, make sure its state is not interrupted, e.g. allow the pod to be migrated to another node.

Nonetheless, I'm in the middle of setting up Kubernetes with kubeadm and the Cilium network. It's already really easy to do, it will only get easier and more stable over time, and it's already great.

When you look at the storage example: yes, it's more difficult than just using a hard drive. But that ignores the issues with a single hard drive: backup, checksums / bit rot, and recovery. With a storage layer you can actually increase the replica count and back up ALL storage volumes automatically.

The same with networking: with Cilium you now get a lightweight firewall with DNS support.

It is much more important for the whole industry to start rebuilding software to be more cloud/container native. This will reduce the pain points we have right now and make operations more resilient. Take Jenkins, for example: instead of one big master, have an HA setup for your work queue, a pod for the dashboard, and schedule workers on demand.

My personal conclusion: Don't use it, if you don't need it. If you need it, embrace the advantages.


> Making pods mutable would break the core benefit of Kubernetes' declarative system.

Which core benefit is that? I’m not following.

> This GitOps 'nonsense' gives me a well-defined and automatically backed-up infrastructure setup with auditing built in. It doesn't allow someone to snowflake around, which is brilliant, and it forces you and your colleagues to write everything down as manifests instead of forgetting things and degrading your system over time (Nix and NixOS are also great examples of such systems).

TFA says you can still use the GitOps “nonsense” if you want under his proposal.


Can't you use node affinity to stop k8s from moving a pod to a different node whilst alive?


That's not the problem this would solve. As long as the node runs, the pod itself runs, and there is no pod with a higher priority, k8s will not evict it from that node.

But imagine a database pod with 60 GiB of RAM in an HA setup. Now you need to update your node; what does k8s do? It evicts the pod and creates a new one, which needs to recover or replay all the logs to fill up 60 GiB of RAM again from nothing. Instead it could migrate the pod to another node and keep the downtime to a minimum.

Or a Jenkins master: it has to shut down on node 1 and be recreated on node 2, which takes time, and then your agents need to be able to recover from it.

You have to be able to roll through your whole k8s infrastructure to update every node on a regular basis, if only for security reasons.


Sooner or later Kubernetes will support live migration of workloads via checkpoint/restore of processes, like Xen and much other software already does.

https://en.m.wikipedia.org/wiki/CRIU

EDIT: https://github.com/kubernetes/kubernetes/issues/3949


I don't think what you are doing is good. I think it's counterproductive.

This narrative of "they're a doctor" doesn't make that person an expert.

In a global pandemic like this, I think you need an expert group of a minimum and maximum size. Those people need to be in the right positions to have access to the most data, and they need to make suggestions and predictions as accurately as possible. The current government then needs to take those and decide what to do.

Your type of 'doctors' shouldn't comment on it as if they were experts; they should do research with proper tools and statistics to support the expert group at the top.

It is easy to say something doesn't work after a few days or weeks. It's just not helping. If experts are trying something and your opinion happens to match the outcome, that's a fluke; it doesn't make you an expert.

All those shitty people who think wearing a mask is an absolute no-go and strips them of their rights are a real problem.

In Germany, people at the 'Querdenker' protests compared themselves to Anne Frank and Sophie Scholl because of this. Scholl was a 22-year-old student when the Nazis executed her by guillotine in Munich for distributing anti-war flyers.

Seeing this is one of my most frustrating experiences with our society so far. While there are a lot of people following these simple requests, it's astounding how irrational people are in the USA.

