
lysdexic

lysdexic@programming.dev
Joined
183 posts • 153 comments

I’ve never gotten to write Rust professionally, but I have always kinda wondered if it was marketed wrong. My thought was always that it should be sold as “easy”, though. It’s easy to write code. It’s hard(er) to make mistakes.

I agree, but I don’t think the problem is marketing. The problem is how some elements of Rust’s community desperately try to upsell the language beyond the value it can actually provide, and once that fails they fall back on toxic behavior, basically just mindlessly shitting on anything that’s not Rust. It goes well beyond a cargo cult mentality, and it’s sad that a fine technology is dragged through the mud by the very people who were expected to show its value.


Where he gives plenty of examples of UB resulting in the compiler optimizing away safety and introducing security vulnerabilities silently.

That’s the bit that those who parrot on about UB get entirely wrong, and yet they cling to it as if it were something meaningful.

Let’s make this absolutely clear: any code you write that triggers UB is a bug you introduced. Your complaints about UB boil down to blaming the language for bugs you created because you didn’t know what you were doing.

As you can configure compilers and static code analysis tools to flag UB as warnings or even errors, any discussion of UB in your code is a discussion about incompetence. Complaining that a programming language purposely leaves the behavior of broken code unspecified, because you don’t know what you’re doing, is the definition of a bad workman blaming his tools.
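To make that concrete, here is a minimal sketch (the snippet and build flags are illustrative, not from the thread): signed integer overflow is UB, and building with `-fsanitize=undefined` on GCC or Clang makes the runtime report it the moment it happens, while static analyzers can flag similar patterns at build time.

```cpp
// Minimal sketch: signed integer overflow is undefined behavior.
// Build: g++ -fsanitize=undefined overflow.cpp && ./a.out
#include <climits>

int increment(int x) {
    return x + 1;  // UB when x == INT_MAX
}

int main() {
    return increment(INT_MAX);  // UBSan reports "signed integer overflow" at runtime
}
```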

If you paid attention to the article you’re quoting, you’d notice that even the author makes it quite clear that programs with UB only “appear to work”. That boils down to the definition of UB, and it’s the reason why every developer with any intro-level C or C++ experience knows quite well that UB means broken code. Why is it hard for you to understand this?


It violates the principle of least surprise.

It really doesn’t. I recommend you get acquainted with what undefined behavior is, and how it’s handled by developers.

You don’t expect the compiler to delete your bounds checking etc.

By design, undefined behavior has a very specific purpose. Newbies are instructed to treat code that leads to undefined behavior as a bug they introduced. For decades, compilers and static code analysis tools have been able to detect and flag undefined behavior as errors in your code.

As I said before, it sometimes seems that clueless developers parrot on about “undefined behavior” as some kind of gotcha even though they clearly have no idea what they are talking about. Sometimes it sounds like they heard it somewhere and just mindlessly repeat it as if it meant something.

The way C and C++ define and use UB is like finding an error at compile time and, instead of reporting it, the compiler decides to exploit it.

What are you talking about? Compilers can and do flag undefined behavior as errors. I recommend you read up on the documentation of any compiler.

Also, I don’t think you fully understand the subject. For example, some compiler implementations leverage UB to add failsafes to production code, such as preventing programs from crashing when, say, null pointers are dereferenced. We can waste everyone’s time debating whether null pointers should ever be dereferenced, but what’s not up for discussion is that, given the choice, professional teams prefer that their code doesn’t crash on users’ machines when it stumbles onto one of these errors.
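One concrete instance of that trade-off (a sketch, not taken from the thread): GCC ships `-fno-delete-null-pointer-checks` precisely so that builds keep defensive null checks the optimizer would otherwise be entitled to remove.

```cpp
// Sketch: after the dereference, the optimizer may assume p != nullptr,
// making the later check dead code that -O2 can legally remove.
// Building with GCC's -fno-delete-null-pointer-checks keeps the check,
// trading a little optimization for a failsafe in production builds.
int read_flag(const int *p) {
    int v = *p;            // compiler may infer p is non-null from here on
    if (p == nullptr)      // without the flag, this branch may be elided
        return -1;
    return v;
}
```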


The most asinine thing I encountered is that the bracket operator on std::map writes a 0 value if the key is not found.

That’s a “you’re using it wrong” problem. By design, operator[] “returns a reference to the value that is mapped to a key equivalent to key, performing an insertion if such key does not already exist.”

The “0 value” just so happens to be the result of value-initializing the mapped type, and for arithmetic types like int, value-initialization is zero-initialization.

If you want to use a std::map to look up the element associated with a key without inserting anything, you need to either use at() and handle the exception thrown when no such key exists, or use find() and check the returned iterator.
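A short sketch of the three access paths (the map contents are made up for illustration; the find() branch uses a C++17 if-initializer):

```cpp
#include <iostream>
#include <map>
#include <string>

int main() {
    std::map<std::string, int> counts{{"a", 1}};

    // operator[] inserts a value-initialized element (0 for int) on a miss.
    int x = counts["missing"];  // inserts {"missing", 0}

    // find() reports a miss without modifying the map.
    if (auto it = counts.find("other"); it != counts.end())
        std::cout << it->second << '\n';

    // at() throws std::out_of_range on a miss and never inserts.
    try {
        std::cout << counts.at("other") << '\n';
    } catch (const std::out_of_range&) {
        std::cout << "no such key\n";
    }

    std::cout << x << ' ' << counts.size() << '\n';  // prints: 0 2
}
```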


Should focus on getting rid of undefined behavior.

What problem do you believe is presented by undefined behavior?


What do you succinctly call a language that has all behavior defined, or equivalently no undefined behavior (aside from designated regions)?

I don’t understand this fixation with undefined behavior. Its origins lie in the design decision to leave the door open for implementations to employ whatever optimization techniques they see fit, without the specification getting in the way. This is hardly a problem.
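For a concrete illustration of that latitude (a sketch, not from the thread): because signed overflow is undefined, a compiler may assume the loop counter below never wraps, treat the loop as simply counted, and unroll or vectorize it without emitting overflow checks.

```cpp
// Sketch: with signed `i`, the compiler may assume i + 1 > i always holds,
// so the loop has a known trip count and can be unrolled or vectorized.
long sum(const int *a, int n) {
    long s = 0;
    for (int i = 0; i < n; ++i)
        s += a[i];
    return s;
}
```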

In practical terms, developers are mindful not to rely on such constructs because, as far as the specification goes, they have unpredictable implications; but even so, they are never a problem. I mean, even in C and C++ it’s trivial to configure the compiler to flag undefined behavior as warnings/errors.

Sometimes it sounds like detractors just parrot “undefined behavior” as some kind of gotcha, in ways I’m not even sure they fully understand.

What problem do you think undefined behavior poses?


FTA:

A few months earlier, the engineering team at Amazon Prime Video posted a blog post explaining that, at least in the case of video monitoring, a monolithic architecture has produced superior performance to a microservices- and serverless-led approach.

I recall reading a write-up of Amazon Prime’s much-talked-about migration away from serverless and onto a monolith.

The key takeaway is that Amazon Prime’s problem is being wrongly pinned on microservices by the anti-microservices crowd. It was mainly an utter failure of analysis and architecture: the system designers failed to account for the performance penalty of sending data over a network, and followed a cargo cult mentality of expecting a cloud provider to magically scale out and buy back the throughput their system design had killed.

Of course, when they sat down and actually thought things through, eliminating the need to shuffle data around over networks ended up avoiding the penalty caused by sending data over a network.

The important thing is that they can pin the blame for a design failure on an architecture, and the anti-microservices crowd eats it up. Except that it says nothing about either microservices or serverless architectures.


Having said this, I’d say that OFFSET+LIMIT should never be used, not because of performance concerns, but because it is fundamentally broken.

If rows are being posted frequently into a table and you try to go through them with OFFSET+LIMIT pagination, the output of each page will not correspond to a stable view of the table’s contents. For each row that is appended to the table, your next page will repeat an element from the tail of the previous page.

Things get even messier once you try to page back through your history, as now both the tip and the tail of each page will be messed up.

Cursor-based navigation ensures these conflicts do not happen, and also has the nice trait of being easily cacheable.


For the article-impaired,

Using OFFSET+LIMIT for pagination forces the database to scan and discard every row before the offset, which in large tables is expensive.

The alternative proposed is cursor-based navigation, which is ID+LIMIT and requires ID to be an orderable type with monotonically increasing values.
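As a sketch (the `posts` table and `id` column are made-up names), the two approaches look like this:

```sql
-- OFFSET+LIMIT: the database must walk and discard the first 40 rows,
-- and rows inserted meanwhile shift the contents of every later page.
SELECT * FROM posts ORDER BY id DESC LIMIT 20 OFFSET 40;

-- Cursor (keyset) pagination: anchor on the last id the client saw.
-- Pages stay stable under inserts, and the lookup can use the index on id.
SELECT * FROM posts
WHERE id < :last_seen_id
ORDER BY id DESC
LIMIT 20;
```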


"Appendable” seems like a positive spin on the (…)

I don’t think your take makes sense. It’s an append-only data structure which supports incremental changes. By design it tracks state and versioning. You can squash it if you’d like, but others might see value in it.
