JoeyJoeJoeJr
Google is certainly guilty of killing off lots of products, but:
The video demonstrates the ecosystem working now, using features that have existed for years, most of which work across hardware platforms from multiple vendors, as well as multiple operating systems (i.e. features that won’t disappear on Google’s whim, because Google doesn’t actually control the tech; it’s built on open standards, etc.).
Let’s also not pretend like Apple has never killed a product, service, or feature. Ecosystems grow, shrink, and change all the time. If you prefer one offering over the other, use it. That’s the entire point of the video.
I think this conflates “ecosystem” with “closed ecosystem” or “walled garden.”
I agree that closed ecosystems are frustrating lock-in tactics. But open ecosystems exist - KDE Connect is actually a good example. It was built for the KDE ecosystem (a desktop environment, apps, and services that integrate and work well with each other), but the protocol is open, so clients can exist for GNOME and other platforms.
I recognize this is mostly semantics, but wanted to call it out because I think the integration and interoperability afforded by an “ecosystem” is extremely user friendly in general. It only becomes a problem when it is weaponized to lock you in.
If your computer is compromised to the point someone can read the key, read words 2-5 again.
This is FUD. Even if Signal encrypted the local data, at the point where someone can run a process on your system, there’s nothing to stop the attacker from adding a modified version of the Signal app, updating your path, shortcuts, etc. to point to the malicious version, and waiting for you to supply the PIN/password. They can siphon the data off then.
Anyone with an actual need for concern should probably only be using their phone anyway, because it cuts your attack surface in half (more than half if you have multiple computers), and you can expect to be in possession/control of your phone at all times, vs. a computer that is often left unattended.
My first thought was similar - there might be some hardware acceleration happening for the jpgs that isn’t happening for the other formats, resulting in a CPU bottleneck. A modern hard drive over USB 3.0 should be capable of hundreds of megabits to several gigabits per second, so it seems unlikely that’s your bottleneck (though feel free to share stats and correct the assumption if that’s wrong - if your pngs are in the 40 megabyte range, your 3.5 frames per second would be pretty taxing).
If you are seeing only 1 CPU core at 100%, perhaps you could split the video clip and process multiple clips in parallel?
If your drive is the bottleneck, this will make things worse. If you want to proceed:
You’re already using ffmpeg to get the sequence of frames, correct? You can add the -ss and -t flags to give a start time and a duration. Generate a list of offsets by dividing the length of the video by the number of processes you want, and feed them through GNU Parallel to your ffmpeg command.
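A minimal sketch of what that could look like, assuming a POSIX shell, GNU Parallel installed, and a hypothetical input.mp4 (the job count, file names, and output pattern are placeholders to adapt to your setup):

    #!/bin/sh
    INPUT=input.mp4   # hypothetical input file - substitute your clip
    JOBS=4            # number of parallel ffmpeg processes

    # Total duration in whole seconds, via ffprobe.
    DUR=$(ffprobe -v error -show_entries format=duration -of csv=p=0 "$INPUT")
    DUR=${DUR%.*}

    # Seconds of video each process should handle (rounded up).
    CHUNK=$(( (DUR + JOBS - 1) / JOBS ))

    # Generate the start offsets (0, CHUNK, 2*CHUNK, ...) and hand each one to
    # its own ffmpeg invocation. -ss is the start time, -t the duration; the
    # offset is baked into the output name so chunks don't overwrite each other.
    seq 0 "$CHUNK" $((DUR - 1)) | parallel -j "$JOBS" \
      ffmpeg -ss {} -t "$CHUNK" -i "$INPUT" frames_{}_%06d.png

Each chunk restarts its frame numbering at 1, so if you need one continuous sequence you’d do a quick renumbering pass afterwards.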