This superior (sic) is what a negative productivity employee looks like.
It might also be a communication issue if the team doesn't challenge the status quo, because this situation is terrible:
> For any ‘exception granted’ we would have to book time with them days in advance then white-board the reason why Sonar is wrong
(no evidence for that, maybe they did challenge it and the superior refused, but I just wanted to mention it)
The only workaround I've found is to create a new function, fill it full of many useless no-op lines, and write a test for that function, just to bump the percentages back up. This is often harder than it sounds, because the linter will block many types of useless no-op code. We then remove the code as part of another ticket.
It trades practicality for useless aesthetics.
And Sonar is far from alone in this. JIRA is the most glaring example I can think of. Growing companies implement cargo-culted tools without understanding their own needs and requirements, and let themselves drift into templates or "best practices" that are neither relevant nor beneficial to their operations, resulting in a pile of frustrations whose impact on the work and the teams is acknowledged only far too late.
You need to inject care not only into your tools, but into how they are perceived by both your customers and their primary users (who may have very different, if not opposed, perspectives on how and why to use them), from pricing, to documentation, to use cases...
This is especially complex when your tool addresses a regulatory requirement, because it is very often received as a constraining or oppressive "solution" rather than an enabling one: it may be comfortable for you as a seller, and comfortable for your customer, but it may also be a point against the sale for your customer's users, which will affect future consideration when those users become purchasing agents themselves.
Some tools bias people into doing bad things. It's not exactly the tool's fault, and they may even have good uses (like Bash linters), but tools guide people, and it's good to remind people not to follow them blindly.
The process of reality getting abstracted to the point where the abstractions stop being representative of reality is extremely common in software engineering.
https://en.wikipedia.org/wiki/Hyperreality
A hyperreal workplace is one where representations of reality take precedence over reality itself, and the reality of a situation ceases to have meaning. i.e. One where people chase metrics for the sake of metrics, instead of understanding that issue count is supposed to reflect the underlying code quality, and code quality should always take priority over the representation.
The broader philosophical context is relevant because it shows the broader cultural problem instead of assuming the issue is limited to a single tool.
Because, rather than recognizing that overdoing abstractions is a real problem and reminding people that there is always a reality out there that won't bend to your wishes, post-modernism (which is the term the GP used) tells people that there is no reality out there, that it's all just human-created abstractions, and that anyone who tries to push back is simply insufficiently post-modernist.
The only point on which all post-modernists agree is a refutation of meta-narratives (and to be explicit: "there is no reality out there, it's all just human-created abstractions" is itself a meta-narrative).
A very, very charitable interpretation is that you are conflating it with the Frankfurt School of critical theory, because that is also somewhat based on psychoanalysis (sigh) (recent postmodernists use psychoanalysis far less, and post-modernism isn't built on it, contrary to critical theory). Postmodernism is mostly post-Marxist, though, while critical theory is mostly neo-Marxist, imho (also, I don't want to be too critical of the Frankfurt School; I think most of their bad rep is caused by bad vulgarization/pop-science, and most critics I read don't seem to understand why it's wrong either).
I do think the reason most people conflate the two is an idiotic Canadian psychoanalyst who can't read (or at least, can't understand what he reads), who _clearly_ has no degree in literature or philosophy, and tries to appear smarter than he is. He invents citations from the books, and sometimes claims that Derrida means something when Derrida himself wrote the opposite. An 8th grader would do better on their reading comprehension assignment. He is wrong. Read and think for yourself.
His pattern for giving a fuck was predictable. My strategy was to hold certain tickets in an undead state (not impacting the metric), then reopen them and close them, demonstrating a metric improvement.
He got his improvement and big shot street cred, users weren’t impacted, and I didn’t have to ruin support to try to grind out small gains.
Just completely impenetrably baffling to me in a way that other fields like chemistry or microbiology or physics or whatever (despite also not being my own) aren't. Not that I understand them, but they're penetrable, I can read more and more and form some kind of understanding.
Is it just me? I don't know what it is, can it really be as simple as philosophy not being taught at school (compulsorily, or young) so I don't have that kind of rough overview of the landscape I do for other broad subjects? (I did take one course in 'contemporary philosophy' at university, which I enjoyed, but we covered only what we covered I suppose - I might be able to hold a (very) basic conversation about Sartre or Wittgenstein, but that page on post-structuralism.. no idea!)
Of course there are just some that are less accessible than others, due to writing style or size of their philosophical project (Hegel is an example of both of those qualities). A lot of French philosophy since the second world war, Baudrillard being no exception, is generally characterized as such as well amongst the anglo audience, although I don't think that this is entirely fair.
I'd say the best thing you can do is never attempt to understand it through Wikipedia, but pick up a full book instead and read it a second time if the argument doesn't make sense the first time. Of course there are some authors I would avoid as a beginner, but someone like Kant is fine for even your first philosopher, and is amongst the biggest names in modern philosophy. Prolegomena and Critique of Pure Reason are two books of his about the same thing written in two opposite ways, the former from easy to difficult, the latter vice versa, I always recommend those.
Sartre and Wittgenstein are both somewhat odd for a contemporary philosophy course. I'm curious why they chose that arrangement. Nevertheless, being able to hold a conversation about either of them is already quite solid, plus you get three philosophers for the price of two! :)
I liked the Barthes example on the post-structuralist page - if a text's author doesn't necessarily have the authority to assert the meaning of a text, then the idea that the text is necessarily part of some identifiable structure is open to question. I assume that means the same text might fit into multiple contexts with different meanings, and trying to pin it to one static meaning based on its initial context is doomed, which suggests structuralist critique is either insufficient or overly reductive.
[0] Although arguably all of philosophy is people misunderstanding each other; otherwise it may as well be a settled field.
Hard sciences are shape rotation.
One is basically stochastic parrotism, and the other is dealing with reality.
Philosophy has no end-game or practical applications. You can make anything up, and so long as enough souls latch onto it via pattern recognition, you have achieved memetic reproduction.
With hard sciences, you can talk all you want, but if your hypotheses are consistently disproven, only the untrained and deranged will latch onto your ideas.
There is nothing to penetrate in philosophy. It's not a reflection of reality, but a reflection of the people it captivates.
It's very little different than music, or any other sort of entertainment. Dare I call it an art. In that case, I would say its recent interpretations are lacking.
A personal aside: much of this era's approach to philosophy reminds me of Fabianism -- wretched, cowardly, and completely superfluous to living an integrated life.
I am not saying it's snake oil, but honestly, from how I've seen it being used, it's not that far off.
I've found the majority of its suggestions helpful, and the ones that are not I simply ignore.
That's the real time sink - figuring out how to get past it. It's a lot more than 2 minutes, sometimes even days if it's something you can't work around and have to go through the red tape if your team isn't empowered to take charge of your own pipelines.
`# noqa: F401` / `// NOSONAR`
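For context, a minimal sketch of those two per-line suppression markers (flake8's and SonarQube's, respectively), shown here in Python comment syntax:

```python
# Per-line lint suppression markers.
# "# noqa: F401" tells flake8 to ignore the unused-import finding (F401);
# "NOSONAR" tells SonarQube to skip all findings on that line.

import os  # noqa: F401

secret_name = "password"  # NOSONAR

print(secret_name)
```

Both markers silence the finding without changing the code itself, which is exactly why metric-driven gates tend to ban or audit them.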
- We support hold-the-line: we only lint on diffs, so you can refactor as you go. Gradual adoption.
- Use existing configs: use the standard OSS tools you know. Trunk Check runs them with standardized rules and output format.
- Better config management: define config within each repo, and still share configs across the org by defining your own plugin repos.
- Better ignores: you can define line-level and project-level ignores in the repo.
- Still have nightly reporting: we let you run nightly on all changes and report them to track codebase health and catch high-risk vulnerabilities and issues. There's a web app to view everything.
Try it and let me know how it goes. https://docs.trunk.io/check/usage
Initially, people always come out of the woodwork insisting that the gate requirements must be hard blockers and that we can just hand-wave away the issues OP listed by tweaking the project rules. I always fight them, insisting that teams should be the owners, and that to gain quick adoption it should just be treated as another tool for PR reviewers. Eventually, people back off and come to accept that Sonar can be really helpful, but that at the end of the day the developers should be trusted to make the right call for the situation. It's not like we aren't still requiring code reviews. I feel for OP, but it's not Sonar's fault the tool is being used for evil instead of good.
The last time I implemented SonarCloud, I ran an anonymous survey to get people's opinions. For the most part, people liked the feedback Sonar provided. More junior engineers and more senior engineers liked it the most; mid-level engineers, not so much. The juniors liked getting quick feedback before asking for code reviews. The more senior engineers - who spend a lot of their time doing PR reviews - liked that it handled more of the generic stuff so they could focus on the business logic or other aspects of the PR. It's just another tool in the toolbox.
However, I saw it causing similar turd polishing behaviour: Sensible code needing to be changed because it exceeded some obstinate metric, any kind of code movement causing existing issues to appear as "new", false positives due to incomplete language feature support, etc.