For some refactoring I needed to find all HTML tags that are not self-closed. I decided to use a regular expression for that, and this is what I came up with.
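The exact expression from the post isn't reproduced here, but a rough sketch of such a search (assuming you only care about the usual void elements) could look like this:

```typescript
// Rough sketch, not necessarily the exact expression from the post:
// match opening tags of void elements that are not self-closed,
// e.g. '<img src="a.png">' but not '<img src="a.png" />'.
const notSelfClosed = /<(img|br|hr|input|meta|link)\b([^>]*[^/>])?>/g;

const html = `<p><img src="a.png"><br /><input type="text"></p>`;
console.log(html.match(notSelfClosed));
// -> [ '<img src="a.png">', '<input type="text">' ]
```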
I agree that some things are easier when not squashing commits, but for the teams I've been working with I've felt that the pros of squashing outweigh the cons, though of course YMMV.
But I didn't know about git bisect skip, thanks for the tip! But sincere question: what happens if there are e.g. three adjacent broken commits? If I have to skip all three of those and the error was introduced in one of them, then git cannot tell me which commit introduced the error, right?
I swear, I didn't come up with that myself, I read it somewhere else, but of course I don't have a source anymore 🙈 Maybe some git developer is a huge fan of wordplay?
The value of a clean git history is often underestimated. I will explain one of its advantages using the git bisect command as an example.
URLPattern brings routing to the web platform | Web Platform | Chrome for Developers
An approach to standardizing common pattern matching use cases.
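For a quick impression, a minimal URLPattern example (not taken from the linked article; TypeScript may need a polyfill or up-to-date DOM types for this) could look like this:

```typescript
// Minimal URLPattern sketch: match a pathname with a named group.
// URLPattern ships in Chromium-based browsers; elsewhere the
// urlpattern-polyfill package can be used.
const pattern = new URLPattern({ pathname: '/books/:id' });

console.log(pattern.test('https://example.com/books/123')); // true

const result = pattern.exec('https://example.com/books/123');
console.log(result?.pathname.groups.id); // "123"
```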
Mocking libraries come with disadvantages, but fortunately they can be replaced by in-memory implementations, at least for repositories.
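A small sketch of what that can look like, using a hypothetical UserRepository (the names are made up, not from the article):

```typescript
// Hypothetical example: instead of mocking the repository in tests,
// provide a second, real implementation that keeps its data in memory.
interface User {
  id: string;
  name: string;
}

interface UserRepository {
  save(user: User): Promise<void>;
  findById(id: string): Promise<User | undefined>;
}

// The production implementation talks to a real database (omitted here):
// class SqlUserRepository implements UserRepository { ... }

// The test implementation behaves like the real thing, just without a database.
class InMemoryUserRepository implements UserRepository {
  private readonly users = new Map<string, User>();

  async save(user: User): Promise<void> {
    this.users.set(user.id, user);
  }

  async findById(id: string): Promise<User | undefined> {
    return this.users.get(id);
  }
}
```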
With API descriptions rising in popularity, the main question I hear folks asking about is "API Design-first" or "code-first". This is a bit of a misleading question because these are not two unique things, there are a few variants. Code-First, Write Docs "When We Have Time" This is how I
GitHub uses MySQL to store vast amounts of relational data. This is the story of how we seamlessly upgraded our production fleet to MySQL 8.0.
jq is a nice JSON processor, which is helpful when working with JSON output, no matter whether it was retrieved using curl or any other command.
Neovim comes with a very powerful command system, which can even be combined with existing shell commands!
It was not trivial to set up nginx with php-fpm running in separate containers in Kubernetes. Therefore I want to explain how I got it to work.
React has introduced hooks to replace classes. Some people are huge fans, while I am a bit more skeptical. An explanation.
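For context, this is roughly the hooks style the article talks about, shown with a made-up counter component:

```tsx
// Made-up counter component using hooks instead of a class.
import { useState } from 'react';

export function Counter() {
  // useState replaces this.state / this.setState from class components.
  const [count, setCount] = useState(0);

  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}
```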
Simple code often does not require complicated packaging mechanisms. Reusing established tools like Git and make seems perfect for this use case.
I think this is one of the most common misconceptions about DRY. Just because the same line appears twice in your code base, it is not automatically a violation of DRY. If you check whether a number is bigger than 18, it is definitely not a good idea to extract that check when you are comparing the hour of the day in one place and the age in the other. In that case it would even be bad to create an abstraction, and it would not be a violation of DRY. And I agree that something like this leads to code that is hard to maintain.
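A made-up example of what I mean:

```typescript
// Two checks that happen to look the same but mean different things.
const isAdult = (age: number): boolean => age > 18;
const isEvening = (hour: number): boolean => hour > 18;

// Extracting a shared isGreaterThan18(value) helper would couple the
// age check to the time of day: changing one rule would silently change
// the other. The duplication here is not a DRY violation.
console.log(isAdult(21), isEvening(20));
```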
I have also seen well commented code, but in this article I concentrate on the bad ones. Are you saying you have never seen a bad code comment?
I think this is also in line with my article, since not being able to put yourself into somebody else's shoes (or even into your future self's) is the reason for so many bad comments. But adding a comment to every single line cannot be the solution either, at least not in a high-level programming language.
I agree with almost all of what you say, but the thing IME is that in most cases people learn to comment a lot, which results in comments that feel like they've been written just for the sake of it, which is one of the main problems IMO. It's not like "just add a comment, it won't hurt": comments can be immensely misleading, and it can literally take a lot of time to figure out that a comment was wrong if you trust the wrong one.
I also agree that this tends to be worse with bad code, which also is not surprising. Sometimes it feels to me like people think they can fix bad code with some comments, and I think that is far from being true.
I also admit that especially the title of the article might be a bit provocative, but given the generally positive sentiment towards comments I think this is called for. Sometimes you have to exaggerate a bit to get some attention. I don't like click-baiting either, but unfortunately it works ;-)
I give you that, but I am not talking about assembly languages, therefore the examples from my blogpost aren't showing any :-)
Totally agree, that's why I also mentioned this in the article.
Very often good code that is self-explanatory does not need any comments at all and if it does, the comment should describe why it has been implemented this way instead of just repeating what the code already says.
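A made-up example to illustrate the difference:

```typescript
// Made-up snippets to illustrate the difference.

let counter = 0;
// Redundant comment, it just repeats what the code already says:
// increase the counter by one
counter++;

// Useful comment, it explains *why* the code looks the way it does:
// the (hypothetical) billing API counts months starting at 1, while
// Date#getMonth starts at 0, so the value has to be shifted.
const billingMonth = new Date().getMonth() + 1;

console.log(counter, billingMonth);
```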
Comments in code are quite often a code smell. Let’s see what is suboptimal about comments and talk about some strategies to avoid them.
I am not exactly sure what you mean by that... But the main advantage is that the command does not have to be executed manually every time you change something. Instead, entr recognizes when something changes and re-executes the command for you.
Creating objects is a very basic task. Although it seems like a simple problem, the way you do it can be improved by using patterns like builder and factory.
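A small sketch of the two patterns (hypothetical Pizza example, not taken from the article):

```typescript
// Hypothetical Pizza example for the two patterns.
class Pizza {
  constructor(
    readonly size: 'small' | 'large',
    readonly toppings: readonly string[],
  ) {}
}

// Builder: assemble a complex object step by step.
class PizzaBuilder {
  private size: 'small' | 'large' = 'small';
  private readonly toppings: string[] = [];

  withSize(size: 'small' | 'large'): this {
    this.size = size;
    return this;
  }

  addTopping(topping: string): this {
    this.toppings.push(topping);
    return this;
  }

  build(): Pizza {
    return new Pizza(this.size, this.toppings);
  }
}

// Factory: hide which concrete variant gets created.
const createMargherita = (): Pizza => new Pizza('large', ['tomato', 'mozzarella']);

const custom = new PizzaBuilder()
  .withSize('large')
  .addTopping('mushrooms')
  .build();

console.log(custom, createMargherita());
```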
Web push book provides all the information you need to learn about the web push API.
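For a first impression, subscribing a browser to push messages roughly looks like this (a sketch assuming a service worker at /sw.js and a VAPID application server key):

```typescript
// Rough sketch of subscribing a browser to push messages.
// Assumes a service worker at /sw.js and a VAPID application server key.
async function subscribeToPush(vapidPublicKey: Uint8Array): Promise<PushSubscription> {
  const registration = await navigator.serviceWorker.register('/sw.js');
  return registration.pushManager.subscribe({
    userVisibleOnly: true,
    applicationServerKey: vapidPublicKey,
  });
}
// The resulting subscription (endpoint + keys) is then sent to your own
// server, which uses it to send push messages via the push service.
```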
Building a graph is a pretty straightforward task in D3.js, but I’ve had a hard time understanding how to update it. This is an attempt to explain why.
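A minimal sketch of the data join that makes updates work in D3.js (a plain circles example, not the graph from the article, using the newer selection.join shorthand):

```typescript
import * as d3 from 'd3';

// The same render function works for the first render and for every
// later update of the data: join handles enter, update and exit.
function render(data: number[]): void {
  d3.select('svg')
    .selectAll<SVGCircleElement, number>('circle')
    .data(data)
    .join('circle')
    .attr('cx', (_d, i) => i * 30 + 20)
    .attr('cy', 50)
    .attr('r', (d) => d);
}

render([5, 10, 15]);
render([10, 20]); // removes the superfluous circle, updates the rest
```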
Sure, web applications have different requirements and might warrant the use of more JavaScript than a website does. But one of the biggest problems nowadays IMO is that many developers choose these fancy technologies for websites too, just because they like them, without thinking too much about how that affects the user, which is mostly in a negative way. If you are building a website for the local bakery, HTML and CSS backed by any CMS probably suffice, and there is no need to add the complexity of client-side JavaScript and SSR (or whatever) to it.
Already know that one, probably my favourite website on the world wide web ;-)
I think the point of the article (and I agree with it) is that "modern" websites (i.e. ones built with heavy JavaScript frameworks) have real issues that websites built without loads of client-side JavaScript do not have. I guess some websites built in 2005 perform better and are more accessible than websites being built today.
My experience is the same: the companies I've worked at did not only have female designers, so I cannot really relate to that part of the article. However, I think the rest is spot on, there really is a gap between design and coding, which causes websites to have fundamental issues.
Exploring the reasons why we no longer have web designers.
The practice of keeping all commits green can help create better software faster. Let’s explore why.
Demystifying Containers - Part I: Kernel Space
This series of blog posts and corresponding talks aims to provide you with a pragmatic view on containers from a historic perspective. Together we will discover modern cloud architectures layer by…
Click here to see how I implemented TodoMVC in ~170 lines of modern vanilla JavaScript.
I have even mentioned stacking contexts in the article, and the thing is that they are not only introduced with z-index, which makes them even more complex :-/ So yeah, it certainly helps if you understand them, but I think it does not make the problem less complex.
Whenever I use z-indexes, I end up regretting it at some point, especially with libraries utilizing components. Let’s see if we can avoid them altogether.
I mean, it is not really inline styles; with inline styles alone it is e.g. not possible to implement a hover style AFAIK. I think the inventor has written a blog post explaining the steps, is that what you are referring to? I also read that, and it kinda makes sense, but basically giving up on development tools working properly is quite a high price to pay IMO.
I would also be interested in seeing a performance benchmark. As the article says, gzip will probably make the difference in terms of network traffic negligible, but it would be interesting to see the impact it has on parsing HTML.