Your format looks half-baked and not thought all the way through. Take, for instance, the success bool: what information does it add that error_code and the request’s own status code don’t already convey? And what’s the point of context if it is both unspecified and optional?

permalink
report
parent
reply

Context is whatever makes sense to provide to a consumer to help them debug it or respond to it

So it’s both optional and unspecified. This means it can’t be parsed or relied upon, especially by consumers. It’s useless.

the same basic idea as in the rfc under details.

No, it isn’t. Contrary to your ad-hoc format, RFC9457 specifies exactly the data type of detail and what its purpose is. This allows third parties to reliably consume resources that comply with RFC9457, while your ad-hoc format leaves clients no option other than to ignore it.

IMO, it can’t easily be generalized. Some APIs may have context to provide, others may not.

What services can produce matters little; what matters is whether clients can consume it. Your ad-hoc format leaves this field unspecified and optional, and thus gives clients no option other than to ignore it. It’s unusable.

Success is something that you can sniff for after deserializing, as IIRC the Fetch API will not throw except for network errors, even in the event of a 4XX or 5XX.

What the Fetch API does or does not do is irrelevant. The responsibility for putting together a response and generating the resource shipped with it lies exclusively with your service. If it outputs a resource that is unable to tell clients what went on, that’s a problem caused both by how your service is designed and by the ad-hoc format it outputs.

The main takeaway is that RFC9457 is well specified and covers basic use cases, while your ad-hoc format is broken by design. Thus when you describe the RFC as “overwrought”, you’re really just exposing how half-baked your own approach is.
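For reference, these are the members RFC 9457 actually specifies for an application/problem+json body. The struct below is just my own sketch of that shape: the field names and semantics come from the RFC, while the C++ representation is an assumption for illustration.

```cpp
#include <optional>
#include <string>

// Sketch of the members RFC 9457 defines for a problem details object.
// The RFC itself specifies a JSON object served as application/problem+json;
// this struct only mirrors its shape.
struct ProblemDetails {
    std::string type = "about:blank";     // URI identifying the problem type
    std::string title;                    // short, human-readable summary of the type
    std::optional<int> status;            // HTTP status code generated for this occurrence
    std::optional<std::string> detail;    // human-readable explanation of this occurrence
    std::optional<std::string> instance;  // URI identifying this specific occurrence
    // Extension members are allowed, but they are defined per problem type,
    // so consumers know how to interpret them.
};
```

Because every field has a specified type and meaning, a client can parse any compliant response without knowing which service produced it.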

permalink
report
parent
reply

Here’s a way to convince a team to write unit tests:

  • set up a CI/CD pipeline,
  • add a unit test stage,
  • add code coverage calculation,
  • add a rule where the unit test stage fails if the code coverage metric drops (a minimal gate is sketched below),
  • if your project is modularized, add pipeline stages to build, test, and track code coverage per module.

Now, don’t set the threshold to, say, 95%. Keep it somewhat low. Also, be consistent but not a fundamentalist.

Also, make test coverage a part of your daily communication. Create refactoring tickets whose definition of done specifies code coverage gains. Always give a status report on the project’s code coverage, and publicly praise those who did work to increase code coverage.
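To make the gating rule concrete, here is a rough sketch of what such a gate boils down to. The tool and its interface are hypothetical; in practice you would normally rely on the threshold option your coverage tool already provides, but the exit-code contract is the same.

```cpp
// coverage_gate.cpp - hypothetical helper run as a CI/CD pipeline step:
//   coverage_gate <covered_lines> <total_lines> <min_percent>
// A non-zero exit status fails the stage, which is what enforces the rule.
#include <cstdlib>
#include <iostream>

int main(int argc, char** argv) {
    if (argc != 4) {
        std::cerr << "usage: coverage_gate <covered_lines> <total_lines> <min_percent>\n";
        return 2;
    }
    const double covered = std::atof(argv[1]);
    const double total = std::atof(argv[2]);
    const double min_percent = std::atof(argv[3]);

    const double percent = total > 0.0 ? 100.0 * covered / total : 0.0;
    std::cout << "line coverage: " << percent << "% (minimum: " << min_percent << "%)\n";

    // Keep the threshold somewhat low, as argued above: the point is to make
    // drops visible and start a conversation, not to chase a number.
    return percent >= min_percent ? 0 : 1;
}
```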

permalink
report
reply

This could be a good opportunity to introduce the concept of test-driven development (TDD) without the necessity to “write tests first”. But I think it can help illustrate why having tests is better when you are expecting to make changes because of the safety they provide.

I doubt that by now the concept of TDD is unheard of in any professional team. Name-dropping concepts actually causes any code quality effort to lose credibility, and works against you.

Also, TDD’s credibility is already low, as it piles on the requirement of spending inordinate amounts of extra effort on aspects of a project that don’t deliver features, and thus its value-add is questionable from a project management perspective.

One aspect that does work is framing the need for tests as assurance that specific invariants are verified and preserved, and thus that they help prevent regressions and expected failure modes. In my experience it’s far easier to sell the need for specific tests if they are framed as “we need assurances that this component does not fail under conceivable use cases” and especially as “we were screwed by this bug and we need to be absolutely sure we don’t experience it ever again.”
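To illustrate that last framing, here is a minimal, hypothetical regression test. parse_port and the incident it pins down are made up; the point is the shape: the test names the invariant and the bug it guards against, using plain asserts to stay framework-free.

```cpp
// Hypothetical regression test: we were once bitten by a crash when the
// configuration contained an empty or out-of-range port, so we pin that
// invariant down permanently.
#include <cassert>
#include <optional>
#include <string>

std::optional<int> parse_port(const std::string& text) {
    if (text.empty() || text.size() > 5) return std::nullopt;  // the empty case is the original bug
    for (char c : text) {
        if (c < '0' || c > '9') return std::nullopt;           // reject non-numeric input
    }
    const long value = std::stol(text);
    if (value < 1 || value > 65535) return std::nullopt;
    return static_cast<int>(value);
}

int main() {
    // Invariant: conceivable bad inputs never crash, they simply yield "no port".
    assert(!parse_port("").has_value());
    assert(!parse_port("abc").has_value());
    assert(!parse_port("70000").has_value());

    // The happy path still works.
    assert(parse_port("8080") == 8080);
    return 0;
}
```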

permalink
report
parent
reply

That should be a disciplinary issue.

And that’s how you get teams to stop collaborating and turn your work environment to shit.

permalink
report
parent
reply

Focusing on code coverage (which doesn’t distinguish between more and less important parts of the code) seems like the opposite of your very good (IMO) recommendation in another comment to focus on specific high-value use-cases.

The usefulness of code coverage ratios is to drive the conversation on the need to track invariants and avoid regressions. I agree it’s very easy to interpret a metric as a target to optimize, but in this context coverage ratios are primarily used to raise the question of why a unit test wasn’t added.

It’s counterproductive to aim for ~100%, but without this indicator any question or concern regarding missing tests will feel arbitrary. With coverage ratios being tracked, this topic becomes systematic and helps build up a team culture that is test-driven, or at least test-aware.

Code coverage is an OK metric and I agree with tracking it, but I wouldn’t recommend making it a target. It might force developers to write tests, but it probably won’t convince them.

True. Coverage ratios are an indicator, and they should never be an optimizable target. Hence the need to keep minimum coverage ratios low, so that the team has the flexibility to manage them. It’s also important to have CI/CD pipelines export the full coverage report, to track which parts of the code are not covered.

The goal is to have meaningful tests and mitigate risks, and have a system in place to develop a test-minded culture and help the team be mindful of the need to track specific invariants. Tests need to mean something and deliver value, and maximizing ratios is not it.

permalink
report
parent
reply

Well, it’s really interesting that this is a hack that works, but you’re really fighting the compiler here.

I don’t think that’s a fair take. Keep in mind that the blog post covers a very specific corner case: you want return value optimization but you don’t want a movable type. If you wanted the type to be movable, you could just go ahead with the happy path and define the move constructor.

What this blog post illustrates is a clever observation about a (possibly unintended) consequence of the way the C++ standard specified this behavior: for RVO to kick in, a necessary and sufficient condition is to have the move constructor declared. Declared, but not defined. Oh.
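A minimal sketch of the trick as I read it; Widget and make_widget are placeholders, and whether it actually links depends on the compiler really eliding the move.

```cpp
#include <string>

class Widget {
public:
    explicit Widget(std::string name) : name_(std::move(name)) {}

    // The observation from the post: the move constructor only needs to be
    // *declared* so overload resolution succeeds on the return statement.
    // No definition is ever provided.
    Widget(Widget&&) noexcept;

    Widget(const Widget&) = delete;
    Widget& operator=(const Widget&) = delete;
    Widget& operator=(Widget&&) = delete;

    const std::string& name() const { return name_; }

private:
    std::string name_;
};

Widget make_widget() {
    Widget w("example");
    // Returning a named local formally selects the move constructor, but when
    // the compiler elides the move (NRVO), the missing definition is never
    // referenced, so the linker has nothing to complain about.
    return w;
}

int main() {
    Widget w = make_widget();  // C++17: this initialization is elided outright
    return w.name().empty() ? 1 : 0;
}
```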

This is making me all the happier I switched to Rust 😂

Except that Rust is not specified in an international standard, so this sort of thing can be downplayed as either an implementation bug or an implementation-specific quirk.

permalink
report
parent
reply

I am constantly losing track of which function is where and I’m never quite sure if I have code in the right place.

That’s not a Rust concern but a software architecture concern. The software architecture pattern you follow determines how the project is structured, and it exists to address common requirements, such as ensuring that things that change a lot are easy to change and things that don’t change stay out of the way.

I recommend you read up on topics such as layered architecture, hexagonal architecture/ports and adapters, clean architecture, etc.
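If it helps, here is a bare-bones sketch of the ports-and-adapters idea (the names are made up, and the same shape works in Rust with traits instead of abstract classes). The domain layer owns the interface and knows nothing about storage; infrastructure supplies the implementation. That split is what answers “where does this function go?”.

```cpp
#include <optional>
#include <string>
#include <unordered_map>
#include <utility>

// Port: an interface owned by the domain layer. Domain code only sees this.
struct UserRepository {
    virtual ~UserRepository() = default;
    virtual std::optional<std::string> find_email(const std::string& user_id) = 0;
};

// Domain/use-case layer: business rules live here, free of storage details.
std::string build_greeting(UserRepository& users, const std::string& user_id) {
    const auto email = users.find_email(user_id);
    return email ? "Hello, " + *email : "Hello, stranger";
}

// Adapter: an infrastructure-side implementation of the port. A real project
// might back this with a database or an HTTP client instead of a map.
class InMemoryUserRepository : public UserRepository {
public:
    void add(std::string id, std::string email) { data_[std::move(id)] = std::move(email); }

    std::optional<std::string> find_email(const std::string& user_id) override {
        const auto it = data_.find(user_id);
        if (it == data_.end()) return std::nullopt;
        return it->second;
    }

private:
    std::unordered_map<std::string, std::string> data_;
};

int main() {
    InMemoryUserRepository repo;
    repo.add("42", "user@example.com");
    return build_greeting(repo, "42").empty() ? 1 : 0;
}
```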

permalink
report
reply

True, though here the hack is incredibly unintuitive for the programmer.

I don’t think this trick is anything other than trivia. The happy path to enable RVO is to provide a move constructor. There is nothing unintuitive about that.

This trivia item just points out that, the way the C++ standard is worded, the definition isn’t actually required. That’s hardly relevant.

Not to mention the compiler error that should catch this now only occurs at link time, and linking errors are even more cryptic to grok.

There is nothing peculiar about handling missing definitions. Linkers only flag a missing definition if the symbol is actually referenced.

When they made RVO mandatory, they should’ve removed the constructor declaration requirement as well, instead of a half-ass solution like this.

I don’t think your observation makes sense, or is even appropriate to the topic. RVO requires a movable type, and the rule of five is a well-established aspect of C++. RVO does not change anything in that regard.

permalink
report
parent
reply

Smart pointers were introduced over a decade ago, and since their introduction there have already been four updates to the C++ standard. There is really no excuse to shy away from smart pointers at this point. I really recommend you get up to speed on them.
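A quick refresher sketch, in case it helps; the Connection type is just a stand-in.

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <utility>
#include <vector>

struct Connection {
    explicit Connection(std::string h) : host(std::move(h)) {}
    std::string host;
};

int main() {
    // Sole ownership: the Connection is destroyed when its owner goes away.
    auto conn = std::make_unique<Connection>("db.example");

    // Ownership is transferred explicitly; there is no silent copying.
    std::unique_ptr<Connection> owner = std::move(conn);

    // Shared ownership: the object lives until the last shared_ptr is gone.
    auto shared = std::make_shared<Connection>("cache.example");
    std::vector<std::shared_ptr<Connection>> pool{shared, shared};

    std::cout << owner->host << ", " << pool.size() << " extra refs to " << shared->host << "\n";
    return 0;  // no delete anywhere: both objects are released automatically
}
```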

permalink
report
parent
reply