Bruno Sartori

About Me

I am an experienced front-end developer specializing in complex front-end solutions, working with modern technologies such as ReactJS, React Native, NextJS, SASS, and TypeScript.

My skills include the development of multi-tenant platforms, OTT applications, and Chromecast integration. I have successfully contributed to major projects involving significant platforms and technologies, including Samsung Tizen and LG WebOS.

My professional background includes creating advanced features for video control, implementing analytics and tag management solutions, and optimizing SEO performance. I have a proven track record of achieving ambitious goals and enhancing user experiences through innovative web solutions.

With over 8 years of experience in web development, I am proficient in various modern technologies and methodologies. My academic background includes a postgraduate degree in Internet of Things and a degree in Systems Analysis and Development.

Latest posts

  • How to Set Up Dependabot for Automated Dependency Management
by Bruno Sartori

Introduction

Dependabot is a tool created by GitHub to automate dependency management for your project. It monitors your project's dependencies and automatically creates pull requests to update them when needed. Dependabot also highlights security vulnerabilities in your project's dependencies, helping you prevent potential risks in your code.

Why Use Dependabot?

Here are some key benefits of using Dependabot:
  • Regular Automated Updates: Dependabot consistently checks for outdated or vulnerable dependencies and opens pull requests (PRs) to keep them up to date.
  • Enhanced Productivity: Streamlining dependency management lets your team focus on developing features instead of constantly monitoring package changes.
  • Ensuring Compatibility: Because Dependabot delivers every update as a pull request, you can confirm that the changes are compatible before integrating them.

How to set up Dependabot

Follow the steps below to set up Dependabot in your project:

1. Enable Dependabot for Your Repository

To begin using Dependabot, navigate to the repository where you want to enable it:
  • Go to the "Security" tab in your repository.
  • In the left sidebar, click "Dependabot Alerts" or "Dependabot Security Updates" (if available).
  • Enable Dependabot by clicking "Enable security updates" if it's not already turned on.

Once enabled, Dependabot will begin monitoring your project for dependency updates and security vulnerabilities.

2. Add a Dependabot Configuration File

To customize how Dependabot operates, you can create a configuration file (dependabot.yml). This file defines which dependencies Dependabot should monitor and how often it should check for updates. Here's how to create the dependabot.yml file:
  • In your repository, navigate to the root directory.
  • Create a new folder called .github.
  • Inside the .github folder, create a file named dependabot.yml.

Now, let's configure it.
Below is an example of a dependabot.yml file for a JavaScript project:

```yaml
version: 2
updates:
  - package-ecosystem: "npm" # Type of dependencies (npm, pip, bundler, etc.)
    directory: "/" # Directory where the dependencies file is located
    schedule:
      interval: "daily" # Frequency of update checks (daily, weekly, monthly)
    ignore:
      - dependency-name: "lodash" # Ignore specific dependencies (optional)
        versions: ["4.17.15"]
```

Key Components:
  • package-ecosystem: Specify the type of dependencies you're using (e.g., npm, pip, gradle).
  • directory: The location of your dependencies file (like package.json, requirements.txt, etc.).
  • schedule: Choose Dependabot's update check frequency (daily, weekly, or monthly).
  • ignore: List any dependencies you want Dependabot to skip for updates.

3. Configure Additional Settings (Optional)

Dependabot also allows you to configure additional settings:
  • Versioning Rules: You can specify versioning constraints to control which versions of a dependency should be updated.
  • Security Updates Only: If you only want Dependabot to notify you of security-related updates, you can configure it to create PRs exclusively for security vulnerabilities.

For example, to restrict Dependabot to only update dependencies with security vulnerabilities, you can add the following to your configuration file:

```yaml
security-updates-only: true
```

4. Reviewing and Merging Pull Requests

Once Dependabot detects an available update, it automatically opens a pull request. The PR includes information about the update, such as the version number and a summary of changes. You can review the changes, run your tests, and merge the PR if everything looks good. You can also configure automatic merging for Dependabot PRs by enabling the auto-merge feature in your repository settings or through your Dependabot configuration.

5. Monitor and Manage Dependabot Activity

You can monitor all of Dependabot's activity and updates through the GitHub Security tab.
It provides an overview of open PRs, dependency updates, and security alerts. You can also configure notifications to receive alerts directly in your GitHub dashboard.

Common Use Cases for Dependabot
  • Updating npm Packages in JavaScript Projects: Dependabot regularly checks the package.json and package-lock.json files and opens PRs to keep dependencies up to date.
  • Maintaining Python Dependencies: For Python projects using requirements.txt, Dependabot helps ensure your dependencies are current and secure.
  • Managing Ruby Gems: Dependabot works with Gemfile and Gemfile.lock for Ruby projects to automate gem updates.
  • Monitoring Dockerfiles: Dependabot can also update Docker dependencies listed in Dockerfiles.

Best Practices for Using Dependabot
  • Test Updates Thoroughly: Ensure that any dependency updates are thoroughly tested in your CI/CD pipeline before merging to prevent breaking changes.
  • Monitor Security Alerts: Act on Dependabot's security alerts promptly to patch vulnerabilities as soon as possible.
  • Ignore Unnecessary Updates: If certain dependencies don't require frequent updates (e.g., if they rarely change or are pinned for specific reasons), consider adding them to the ignore list to reduce noise.

Conclusion

Dependabot simplifies dependency management by automatically updating and securing your project's dependencies. With the setup described in this guide, your GitHub repositories will receive automated pull requests and security notifications, keeping your projects secure and efficient. Incorporating Dependabot into your workflow boosts productivity and reduces the risk of security vulnerabilities in your codebase.
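The auto-merge feature mentioned in step 4 is commonly wired up with a GitHub Actions workflow. The sketch below (the file name and the patch-only condition are illustrative choices, not from the original post) uses the official dependabot/fetch-metadata action plus the gh CLI to auto-merge patch-level Dependabot PRs once checks pass:

```yaml
# .github/workflows/dependabot-auto-merge.yml (hypothetical example)
name: Dependabot auto-merge
on: pull_request

permissions:
  contents: write
  pull-requests: write

jobs:
  auto-merge:
    runs-on: ubuntu-latest
    if: github.actor == 'dependabot[bot]'
    steps:
      - name: Fetch Dependabot metadata
        id: metadata
        uses: dependabot/fetch-metadata@v2
      - name: Enable auto-merge for patch updates only
        if: steps.metadata.outputs.update-type == 'version-update:semver-patch'
        run: gh pr merge --auto --merge "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

Note that `gh pr merge --auto` only queues the merge; the PR still waits for your required status checks, so this pairs well with the "Test Updates Thoroughly" practice below.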
  • How to add Structured Data Markup to your Website
by Bruno Sartori

Introduction

Search Engine Optimization (SEO) is crucial for any business that wants to succeed online. One powerful tool that has emerged in recent years is structured data markup, which enables search engines to better understand and interpret a website's content. This leads to better visibility and more informative displays in search results. In this article, we'll dive into what structured data markup is, its relevance for SEO, and how to implement it on your website.

What is Structured Data Markup and Why is it Important for SEO?

Structured data markup is a standardized way to provide metadata about a webpage's content. By utilizing structured data, search engines like Google can better grasp the context of your page and showcase it more clearly in search results. Often represented in JSON-LD format, structured data can describe a variety of content types like articles, recipes, videos, and products.

Implementing structured data can result in rich results, which enhance your search listing by displaying additional details like images, ratings, and prices. These rich results are more visually appealing and can greatly increase your click-through rate (CTR) and user engagement. For example, according to Google Search Central:
  • Rotten Tomatoes applied structured data to 100,000 pages and saw a 25% increase in CTR for those pages.
  • The Food Network enabled search features on 80% of its pages, leading to a 35% boost in visits.
  • Rakuten noted users spent 1.5x more time on pages with structured data and had a 3.6x higher interaction rate on AMP pages with search features.
  • Nestlé reported an 82% increase in CTR for pages that appeared as rich results compared to non-rich result pages.

Key Benefits of Structured Data:
  • Improved Search Appearance: Structured data enhances your site's appearance in search results with rich snippets that include additional content like reviews, recipes, and events.
  • Higher CTR: Rich snippets attract more clicks as users can preview more information directly in the search results.
  • Optimized for Voice Search: Structured data is key for voice search, allowing search engines to provide more accurate responses.
  • Better Content Relevance: Helps search engines understand and categorize your content, potentially improving your rankings.

Common Types of Structured Data

Here are some widely used types of structured data that help optimize SEO:
  • Product: For eCommerce websites, displays product details like pricing, availability, and customer reviews.
  • Article: Ensures news articles or blog posts appear in search results with added features like images and publication dates.
  • Breadcrumbs: Shows the structure of the site, improving navigation for users.
  • Event: Highlights details about upcoming events, including time and location.
  • Recipe: Ideal for food-related content, allowing users to see images, preparation times, and reviews.
  • Video: Makes video content stand out with thumbnails, play times, and other visual enhancements.
  • FAQ: Displays frequently asked questions directly in search results, making it easier for users to get quick answers.

How to Implement Structured Data on Your Website

Structured data can be added using several formats, including JSON-LD, Microdata, and RDFa. JSON-LD is often the preferred method as it's simpler to implement and maintain.

Steps for Implementing Structured Data:
1. Select the Structured Data Type: Choose the type of structured data that fits your content. Google's Structured Data Markup Helper is a helpful resource for picking the right schema.
2. Generate the Markup: Once you know the type of data you need, generate the JSON-LD code using tools like Google's Structured Data Markup Helper or Schema.org.
3. Add the Markup to Your Website: Insert the generated JSON-LD code into the <head> section or body of your HTML.
For example, here's a JSON-LD snippet for a recipe page:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org/",
  "@type": "Recipe",
  "name": "Chocolate Chip Cookies",
  "author": {
    "@type": "Person",
    "name": "John Doe"
  },
  "datePublished": "2024-09-17",
  "description": "A delicious chocolate chip cookie recipe.",
  "recipeIngredient": [
    "1 cup sugar",
    "2 cups flour",
    "1 cup chocolate chips"
  ],
  "recipeInstructions": [
    "Preheat oven to 350 degrees F.",
    "Mix sugar, flour, and chocolate chips.",
    "Bake for 10-12 minutes."
  ],
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "5",
    "reviewCount": "30"
  }
}
</script>
```

4. Test the Markup: Before going live, use tools like Google's Rich Results Test to check for any errors.
5. Monitor Performance: After implementation, track how your structured data is performing in Google Search Console. It provides data on what structured data is being used and any issues that may arise.

Best Practices for Structured Data:
  • Use Valid Schema: Always ensure that your structured data follows the standards laid out by Schema.org.
  • Relevance is Key: Only add structured data that accurately represents your content. Avoid manipulating search results with irrelevant data.
  • Regular Updates: Keep your structured data up to date, especially for time-sensitive elements like prices or product availability.
  • Avoid Overloading: Focus on including structured data that enhances the user experience without overstuffing your content.

Conclusion

Structured data markup is an essential part of today's SEO strategy. It helps search engines better understand and present your content, increasing your chances of appearing in rich results. By implementing structured data correctly and following best practices, you can improve your site's visibility, increase traffic, and enhance its overall performance in search results. If you're looking to elevate your website's SEO, incorporating structured data is a crucial step!
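If your pages are rendered programmatically, the JSON-LD payload can be generated rather than hand-written, which keeps it in sync with your content. A minimal TypeScript sketch (the `recipeJsonLd` helper and the `Recipe` shape here are illustrative, not from the original article):

```typescript
// Hypothetical helper: serialize a schema.org Recipe into a JSON-LD <script> tag.
interface Recipe {
  name: string;
  author: { "@type": "Person"; name: string };
  datePublished: string;
  description: string;
  recipeIngredient: string[];
  recipeInstructions: string[];
}

function recipeJsonLd(recipe: Recipe): string {
  // Spread the recipe fields after the JSON-LD context/type keys.
  const payload = { "@context": "https://schema.org/", "@type": "Recipe", ...recipe };
  return `<script type="application/ld+json">${JSON.stringify(payload)}</script>`;
}

const tag = recipeJsonLd({
  name: "Chocolate Chip Cookies",
  author: { "@type": "Person", name: "John Doe" },
  datePublished: "2024-09-17",
  description: "A delicious chocolate chip cookie recipe.",
  recipeIngredient: ["1 cup sugar", "2 cups flour", "1 cup chocolate chips"],
  recipeInstructions: ["Preheat oven to 350 degrees F.", "Bake for 10-12 minutes."],
});
console.log(tag);
```

The resulting string can be injected into the page's `<head>` at render time.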
References
  • https://developers.google.com/search/docs/appearance/structured-data/search-gallery
  • https://developers.google.com/search/docs/appearance/structured-data/intro-structured-data
  • https://developers.google.com/search/docs/appearance/structured-data
  • How to create and publish an NPM unscoped and scoped package with Typescript
by Bruno Sartori

Prerequisites
  • Node.js installed; a good option for managing multiple versions of Node.js on your PC is to install it via NVM (which also has a Windows version).
  • An account created on https://www.npmjs.com

1. Initialize a new Project

Create a new directory and initialize NPM:

```shell
npm init -y
```

This will create an initial package.json file.

2. Install dependencies

The dependencies we will be using are:
  • typescript: TypeScript adds optional types to JavaScript that support tools for large-scale JavaScript applications for any browser, for any host, on any OS.
  • @types/node: TypeScript definitions for Node.js.
  • ts-node: a TypeScript execution engine and REPL for Node.js. It JIT-transforms TypeScript into JavaScript, enabling you to execute TypeScript directly on Node.js without precompiling.

Use the following command to install them:

```shell
yarn add -D typescript @types/node ts-node
```

After installing the dependencies, initialize TypeScript with:

```shell
npx tsc --init
```

This will create a default tsconfig.json file.

3. Write your first TypeScript code

Create a src folder and a file named index.ts:

```shell
mkdir src
cd src
touch index.ts
```

Inside your index.ts file, write a simple function:

```typescript
const triangleArea = (base: number, height: number): number => {
  return (base * height) / 2;
};

export default triangleArea;
```

4. Building the Project

In your tsconfig.json file:
  • update the outDir property to your desired build directory
  • update the include property so that TypeScript does not compile undesired files into your build directory
  • update the exclude property so that TypeScript skips those paths when resolving include

```json
{
  "compilerOptions": {
    ...
    "outDir": "./dist",
    ...
  },
  "include": ["src"],
  "exclude": ["node_modules", "dist"]
}
```

Update your package.json file to set up your build script:

```json
{
  ...
  "scripts": {
    "build": "tsc",
    ...
  },
  ...
}
```
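As a quick sanity check before publishing, the function can be exercised with ts-node (the consumer snippet below is a hypothetical demo, with the function inlined so it runs standalone):

```typescript
// Inlined copy of src/index.ts so the demo is self-contained.
const triangleArea = (base: number, height: number): number => {
  return (base * height) / 2;
};

// A triangle with base 10 and height 4 has area (10 * 4) / 2 = 20.
console.log(triangleArea(10, 4)); // 20
```

In a real consumer project you would instead `import triangleArea from "your-package-name";` after installing the published package.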
5. Setting things up for publishing

In your package.json file, use the files property to set the directory that should be included in your NPM release, and the main and types properties to determine the entry point of your project:

```json
{
  ...
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
  "files": ["dist"],
  ...
}
```

Don't forget to build your project before publishing. You can configure NPM's prepublishOnly script (which belongs under "scripts") in your package.json to avoid forgetting to build every time you need to publish your package:

```json
{
  ...
  "scripts": {
    ...
    "prepublishOnly": "npm run build"
  },
  ...
}
```

6. Publishing to NPM

Before publishing, log in to NPM:

```shell
npm login
```

Publish your package:

```shell
npm publish
```

Congratulations, you have successfully created your first NPM package! To see your published package, visit https://npmjs.com/package/your-package-name. After the first publish, be aware that NPM does not let you publish your package again unless you increment the version in package.json. A good way to keep your version updated is to set up a pre-commit hook that reminds you to increment your version when committing your files; you can read more about this HERE.

7. Publish your package under an Organization Scope

Publishing an NPM package under an organization scope can be very advantageous for avoiding naming collisions, creating private packages, etc.

7.1. Create an Organization on NPM

To create an organization-scoped NPM package, first you need to create an organization. Go to https://www.npmjs.com, click the profile icon at the top right of the screen, and click + Add Organization. Type the name of your organization and click the Create button under the Free option, which allows you to create unlimited public packages; if you want to create private packages, you can buy the paid plan. On the next screen, you can invite other people to your organization by typing their npm username or email.
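Since npm refuses to republish an existing version, the usual way to bump is the `npm version` command rather than editing package.json by hand. A small demo in a throwaway directory (the path is illustrative; `--no-git-tag-version` skips the git tag npm would otherwise create):

```shell
# Create a throwaway package and bump its patch version (1.0.0 -> 1.0.1).
mkdir -p /tmp/npm-version-demo
cd /tmp/npm-version-demo
npm init -y > /dev/null
npm version patch --no-git-tag-version > /dev/null
grep '"version"' package.json
```

`npm version major` and `npm version minor` work the same way for the other semver components.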
We will skip this step by clicking the Skip this for now button. After this step, the organization will be successfully created.

7.2. Create an organization scoped NPM package

To create an organization-scoped NPM package, after creating the folder for your new project, type:

```shell
npm init --scope=@your-organization-name
```

Continue answering the npm prompts to create a package.json file. You should create the package name with the following format: @your-organization-name/package-name. For example: @bsartori/weeb-logger

Note: If you already have an NPM project created, you don't need to run npm init again; just change the name property in your package.json file to the @your-organization-name/package-name format and you are good to go.

7.3. Publish your organization scoped NPM package

Don't forget to log in to NPM:

```shell
npm login
```

Then publish your package, passing the access privilege appropriate to the type of organization you created:

```shell
npm publish --access public
```

To see your public package, visit https://npmjs.com/package/@your-organization-name/package-name; for example: https://www.npmjs.com/package/@bsartori/weeb-logger

Conclusion

Creating and publishing an NPM package, especially with TypeScript, is a valuable skill for developers who want to share their solutions with the community or reuse them privately across projects. By following this guide, you have learned how to set up your development environment, create a package using best practices, and publish it on NPM, making it globally accessible. Additionally, publishing under an organization scope provides an extra layer of control and organization for companies and teams.
With these skills, you are now equipped to create, manage, and distribute packages efficiently while maintaining a continuous, automated workflow for future releases.

References
  • https://docs.npmjs.com/creating-and-publishing-unscoped-public-packages
  • https://docs.npmjs.com/creating-and-publishing-scoped-public-packages
  • Mastering NPM Link: Simplifying Local Dependency Management
by Bruno Sartori

When creating NPM packages, you will eventually need to debug your package by installing it in another NodeJS project to see how it behaves as a dependency. You could do this by publishing your NPM package and re-installing it in your project every time, or by copying your package into your project's node_modules, but... this would be a pain in the ass, to say the least 😅. This is where npm link comes into play.

What does npm link do?

npm link creates a symlink to your package (first in npm's global folder, then from the consuming application's node_modules to it), so the package resolves exactly as an installed dependency would. This way you can use your local project as a dependency of another project without having to publish it or manually copy it into node_modules.

Setup your package for linking

Before linking your package as a dependency using npm link, make sure you have configured the main and files properties in your package.json file and that you have built your project:

```json
{
  ...
  "main": "dist/index.js",
  "files": ["dist"],
  ...
}
```

For more information on how to properly create and set up an NPM package, you can read my article How to create and publish an NPM unscoped and scoped package with Typescript.

Using NPM Link

To link a package with npm link you simply need to follow these two steps:
1. Go to the root directory of the package to be linked (the package that will be used as a dependency) and run npm link.
2. Go to the root directory of the project that will use your package as a dependency and run npm link [your-package-name].

And that's it, that's really all there is to it. After that you can edit and rebuild your dependency while running your main project, and it will update automatically so you can properly debug things. Let's see this working with an actual example using my package Weeb Logger.
Real World Example

I've created two directories: one with my NPM package called weeb-logger, a logging tool that displays log information directly in the application so I can see logs without having to open DevTools (which can be useful for debugging), and another with a React application created with create-react-app. First I go to the weeb-logger project, which will be used as a dependency, and run npm link. Then, in the test-react-application project, I run npm link @bsartori/weeb-logger, which is the name of my package under an organization scope. After this, @bsartori/weeb-logger appears inside the node_modules folder of test-react-application, and the folder icon indicates that it is a symlink. In my test-react-application project, I import my dependency using import logger from '@bsartori/weeb-logger'; (hovering over the dependency name shows its original pathname). After importing the dependency, I do some configuration the dependency needs and call my logging function. Finally, I save the file and see the results in the browser.

Conclusion

In conclusion, npm link is a powerful and convenient tool for local development of NPM packages. It streamlines testing and debugging without the need to constantly publish or copy your package into the project's node_modules. By creating a symlink, you can easily update and test your package in real time, making development more efficient. Whether you're working on small utilities or larger libraries, incorporating npm link into your workflow can save time and effort, allowing for smoother integration between projects.
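Under the hood, the two-step flow boils down to symlinks. The sketch below simulates the end result with plain ln -s (all paths are made up for the demo; the real npm link additionally registers the package through a global symlink first):

```shell
# Simulate what `npm link` ultimately produces: the consumer's node_modules
# entry is a symlink pointing back at the library's working directory.
mkdir -p /tmp/link-demo/weeb-logger /tmp/link-demo/test-app/node_modules/@bsartori
echo '{"name":"@bsartori/weeb-logger","main":"dist/index.js"}' \
  > /tmp/link-demo/weeb-logger/package.json

# This symlink is the equivalent of step 2 (`npm link @bsartori/weeb-logger`):
ln -sfn /tmp/link-demo/weeb-logger \
  /tmp/link-demo/test-app/node_modules/@bsartori/weeb-logger

# Edits and rebuilds in weeb-logger are now visible to test-app immediately:
readlink /tmp/link-demo/test-app/node_modules/@bsartori/weeb-logger
```

Because node_modules holds a link rather than a copy, every rebuild of the library is picked up by the app with no reinstall.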
Check Out My Other Articles

If you liked this guide, you might enjoy some of my other posts where I share more tips and tricks for devs:
  • Using Husky to Help You Avoid F****ing Up Semantic Versioning: A look at how Husky can save you from versioning headaches and keep your workflow smooth.
  • A Practical Guide to Semantic Versioning: How and When to Update Your Versions: Quick tips on when to bump your versions and how to avoid common mistakes.
  • How to Highlight Your GitHub Repositories on LinkedIn: A simple guide to show off your GitHub work on LinkedIn.
  • Using Husky to help you avoid f****ing up semantic versioning
by Bruno Sartori

Introduction

The use of Semantic Versioning is very important to keep your users informed about changes that may impact how they interact with the software, to maintain compatibility between libraries, and to facilitate collaboration among teams by reducing conflicts and communication failures regarding the current state of the software, among many other things.

That said, keeping semantic versioning flawless can be quite a challenge, either because not everyone who contributes to the code practices it correctly (such as incorrectly changing the MAJOR, MINOR, and PATCH components) or because we may simply forget to update the version before committing or pushing the code to the repository. In any case, manual processes are always prone to human error, so it would be useful to have an automated way to validate whether a version bump is necessary, preventing potential issues. Fortunately, Git has hooks, which are scripts that Git automatically runs before or after certain events like commit or push. We can leverage them to alert us to possible changes that break code compatibility, keeping semantic versioning neat and tidy 😊.

What is Husky?

Husky is an NPM package that makes it easy to integrate Git hooks into your project. It can be used to automate tasks such as running tests, linters, etc. It is extremely fast and weighs only 2kB. Additionally, with Husky, we can create hooks using POSIX shell scripts.

How to install Husky

Use your favorite package manager to install the dependency:

```shell
yarn add --dev husky
```

Use NPX to automatically set up Husky for you:

```shell
npx husky init
```

This will create a pre-commit script in the .husky/ folder and update the prepare script in your package.json file. Boom! It's all good to go.

Creating a pre-commit hook to validate possible breaking changes in the code and prevent us from committing them without incrementing the MAJOR version
On .husky/ folder, open the pre-commit file that Husky has already created for you and paste the following code. Don’t worry, we will talk about what it’s doing in a second. yellow='\\033[0;33m'green='\\033[0;32m'blue='\\033[0;34m'red='\\033[0;31m'no_color='\\033[0m'ABORT_IF_ANY_VERSION_WAS_NOT_UPDATED=0ABORT_IF_MAJOR_VERSION_WAS_NOT_UPDATED=1POTENTIALLY_BREAKABLE_CHANGES=0set -o nounsetcompare_strings() { old_string="$1" new_string="$2" # Initialize the resulting string result="" # Initialize the indexes i=0 j=0 # Traverse the new string and compare it with the old string while [ $i -lt ${#new_string} ]; do new_char="${new_string:i:1}" old_char="${old_string:j:1}" # If the character from the new string is equal to the old one, add it without highlight if [ "$new_char" = "$old_char" ]; then result="$result$new_char" i=$((i+1)) j=$((j+1)) else # If the character from the new string is not in the old string, highlight it in green if [ "$new_char" != "$old_char" ] && [ ! "$new_char" = "$old_char" ]; then result="$result${green}${new_char}${no_color}" i=$((i+1)) else # Add characters from the old string until finding the matching character while [ "$new_char" != "$old_char" ] && [ $j -lt ${#old_string} ]; do result="$result${red}${old_char}${no_color}" j=$((j+1)) old_char="${old_string:j:1}" done fi fi done # Add the remaining characters from the old string, if any while [ $j -lt ${#old_string} ]; do result="$result${red}${old_string:j:1}${no_color}" j=$((j+1)) done # Return the result echo -e "${result}"}printf "${blue}Initializing Husky${no_color}\\n"REPO_ROOT=$(git rev-parse --show-toplevel)SITE_CHANGES=$(git status -s "$REPO_ROOT" | wc -l)printf "Detected ${yellow}$SITE_CHANGES${no_color} changes\\n"if [ "$SITE_CHANGES" -gt 0 ]; then # Check files for function signature changes CHANGED_FILES=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\\.(ts|tsx|js|jsx)$') if [ -n "$CHANGED_FILES" ]; then printf "${blue}Checking TypeScript files for potentially 
breakable changes...${no_color}\\n" # Define the regex pattern and dont ask me how I get this right (hint: ends with "gpt") REGEX="function\\s+[a-zA-Z_$][0-9a-zA-Z_$]*\\s*\\([^)]*\\)(?:\\s*:\\s*[a-zA-Z_$][0-9a-zA-Z_$]*)?\\s*(?:;|\\{)|[a-zA-Z_$][0-9a-zA-Z_$]*\\s*=\\s*\\([^)]*\\)(?:\\s*:\\s*[a-zA-Z_$][0-9a-zA-Z_$]*)?\\s*=>\\s*[^\\{]*?(?:;|\\s*)|(?:public|private|protected)?\\s*[a-zA-Z_$][0-9a-zA-Z_$]*\\s*\\([^)]*\\)(?:\\s*:\\s*[a-zA-Z_$][0-9a-zA-Z_$]*)?\\s*(?:;|\\{)" # Iterate over staged TypeScript files for FILE in $CHANGED_FILES; do if [[ -f "$REPO_ROOT/$FILE" ]]; then printf "Checking file ${yellow}$FILE${no_color}...\\n" # Get added and removed changes STAGED_ADDITIONS=$(git diff --cached "$REPO_ROOT/$FILE" | grep -E "^\\+[^+]" | sed 's/^\\+//') STAGED_REMOVALS=$(git diff --cached "$REPO_ROOT/$FILE" | grep -E "^\\-[^-]" | sed 's/^\\-//') ADDITIONS_MATCHED="" REMOVALS_MATCHED="" # Check if STAGED_ADDITIONS is not empty and execute grep if not if [ -n "$STAGED_ADDITIONS" ]; then # Capture the part that matches the regex ADDITIONS_MATCHED=$(printf "%s" "$STAGED_ADDITIONS" | grep -P -o "$REGEX" || true) fi # Check if STAGED_REMOVALS is not empty and execute grep if not if [ -n "$STAGED_REMOVALS" ]; then # Capture the part that matches the regex REMOVALS_MATCHED=$(printf "%s" "$STAGED_REMOVALS" | grep -P -o "$REGEX" || true) fi if [ -n "$ADDITIONS_MATCHED" ]; then printf "${red}Signature changes detected in ${yellow}$FILE${no_color}. Showing changes:\\n" # Call the function and display the result compare_strings "$REMOVALS_MATCHED" "$ADDITIONS_MATCHED" POTENTIALLY_BREAKABLE_CHANGES=1 fi fi done if [ "$POTENTIALLY_BREAKABLE_CHANGES" -eq 1 ]; then printf "Checking to make sure package version was updated...\\n" if [ "$ABORT_IF_ANY_VERSION_WAS_NOT_UPDATED" -eq 1 ]; then VERSION_CHANGED=$(git diff -G '"version":' --cached package.json | wc -l) if [ "$VERSION_CHANGED" -gt "0" ]; then printf "${green}Version was updated! 
Continuing...${no_color}\\n" else printf "${red}Version was not updated :( Aborting commit.${no_color}\\n" exit 1 fi elif [ "$ABORT_IF_MAJOR_VERSION_WAS_NOT_UPDATED" -eq 1 ]; then CURRENT_VERSION=$(grep -oP '"version":\\s*"\\K[0-9]+\\.[0-9]+\\.[0-9]+"' package.json | tr -d '"') CURRENT_MAJOR=$(echo "$CURRENT_VERSION" | cut -d'.' -f1) STAGED_VERSION=$(git diff --cached package.json | grep -oP '"version":\\s*"\\K[0-9]+\\.[0-9]+\\.[0-9]+"' | tr -d '"') STAGED_MAJOR=$(echo "$STAGED_VERSION" | cut -d'.' -f1) if [ -n "$STAGED_MAJOR" ]; then # Check if the MAJOR version has changed if [ "$CURRENT_MAJOR" != "$STAGED_MAJOR" ]; then printf "${green}MAJOR version was updated! Continuing...${no_color}\\n" else printf "${red}MAJOR version was not updated :( Aborting commit.${no_color}\\n" fi else printf "${red}MAJOR version was not updated :( Aborting commit.${no_color}\\n" exit 1 fi fi fi fifi Now let’s dive into this code for a minute and see what it is doing. 1. Script Initialization yellow='\\033[0;33m'green='\\033[0;32m'blue='\\033[0;34m'red='\\033[0;31m'no_color='\\033[0m'ABORT_IF_ANY_VERSION_WAS_NOT_UPDATED=0ABORT_IF_MAJOR_VERSION_WAS_NOT_UPDATED=1POTENTIALLY_BREAKABLE_CHANGES=0set -o nounset First, we create some variables for changing the color of the output. The two flags ABORT_IF_ANY_VERSION_WAS_NOT_UPDATED and ABORT_IF_MAJOR_VERSION_WAS_NOT_UPDATED let’s you have some control of when to abort the push process. POTENTIALLY_BREAKABLE_CHANGES is a flag to identify if the script has found a breakable change in potential. set -o nounset: This enables a mode where using undeclared variables will trigger an error. It helps avoid accidental errors by ensuring that all variables are defined before being used. 2. 
compare_strings Function compare_strings() { old_string="$1" new_string="$2" # Initialize the resulting string result="" # Initialize the indexes i=0 j=0 # Traverse the new string and compare it with the old string while [ $i -lt ${#new_string} ]; do new_char="${new_string:i:1}" old_char="${old_string:j:1}" The compare_strings function will be used to print what was changed in function`s signatures of the staged files. The while loop iterates through each character of the new_string and compares it to the corresponding character in the old_string. new_char and old_char hold the current characters from each string, based on their indexes. if [ "$new_char" = "$old_char" ]; then result="$result$new_char" i=$((i+1)) j=$((j+1))else if [ "$new_char" != "$old_char" ] && [ ! "$new_char" = "$old_char" ]; then result="$result${green}${new_char}${no_color}" i=$((i+1)) else while [ "$new_char" != "$old_char" ] && [ $j -lt ${#old_string} ]; do result="$result${red}${old_char}${no_color}" j=$((j+1)) old_char="${old_string:j:1}" done fifi If the current characters from both strings match, the character from the new_string is appended to the result without any highlighting. The indexes i and j are incremented to move to the next character in both strings. If the current character from the new_string differs from the old_string, the character from new_string is highlighted in green (using green and no_color) and added to the result. The index i is incremented to move to the next character of the new_string. If the characters don’t match and the new_char is not present in the old_string, the function iterates through the old_string until a match is found, highlighting the unmatched characters in red (indicating they have been removed). j is incremented while traversing the old_string. 
```shell
  done

  # Append any characters remaining in the old string
  while [ $j -lt ${#old_string} ]; do
    result="$result${red}${old_string:j:1}${no_color}"
    j=$((j+1))
  done

  echo -e "${result}"
}
```

After all characters of new_string have been compared, any characters remaining in old_string are appended to result highlighted in red, indicating they were removed. Finally, the function prints result: the compared strings with characters highlighted to indicate additions (green) or removals (red).

3. Finding Updates in Function Signatures

```shell
printf "${blue}Initializing Husky${no_color}\n"

REPO_ROOT=$(git rev-parse --show-toplevel)
SITE_CHANGES=$(git status -s "$REPO_ROOT" | wc -l)

printf "Detected ${yellow}$SITE_CHANGES${no_color} changes\n"

if [ "$SITE_CHANGES" -gt 0 ]; then
  # Check files for function signature changes
  CHANGED_FILES=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.(ts|tsx|js|jsx)$')

  if [ -n "$CHANGED_FILES" ]; then
    printf "${blue}Checking TypeScript files for potentially breakable changes...${no_color}\n"

    # Define the regex pattern, and don't ask me how I got this right (hint: ends with "gpt")
    REGEX="function\s+[a-zA-Z_$][0-9a-zA-Z_$]*\s*\([^)]*\)(?:\s*:\s*[a-zA-Z_$][0-9a-zA-Z_$]*)?\s*(?:;|\{)|[a-zA-Z_$][0-9a-zA-Z_$]*\s*=\s*\([^)]*\)(?:\s*:\s*[a-zA-Z_$][0-9a-zA-Z_$]*)?\s*=>\s*[^\{]*?(?:;|\s*)|(?:public|private|protected)?\s*[a-zA-Z_$][0-9a-zA-Z_$]*\s*\([^)]*\)(?:\s*:\s*[a-zA-Z_$][0-9a-zA-Z_$]*)?\s*(?:;|\{)"

    for FILE in $CHANGED_FILES; do
      if [[ -f "$REPO_ROOT/$FILE" ]]; then
        printf "Checking file ${yellow}$FILE${no_color}...\n"

        STAGED_ADDITIONS=$(git diff --cached "$REPO_ROOT/$FILE" | grep -E "^\+[^+]" | sed 's/^\+//')
        STAGED_REMOVALS=$(git diff --cached "$REPO_ROOT/$FILE" | grep -E "^\-[^-]" | sed 's/^\-//')

        # Apply the signature regex to the staged lines
        ADDITIONS_MATCHED=$(echo "$STAGED_ADDITIONS" | grep -oP "$REGEX")
        REMOVALS_MATCHED=$(echo "$STAGED_REMOVALS" | grep -oP "$REGEX")

        if [ -n "$ADDITIONS_MATCHED" ]; then
          printf "${red}Signature changes detected in ${yellow}$FILE${no_color}. Showing changes:\n"
          compare_strings "$REMOVALS_MATCHED" "$ADDITIONS_MATCHED"
          POTENTIALLY_BREAKABLE_CHANGES=1
        fi
      fi
    done
```

REPO_ROOT is set to the root directory of the Git repository using git rev-parse --show-toplevel, and SITE_CHANGES counts the changes detected in the repository using git status with wc -l (to count the lines of output). If changes are detected, the script looks for staged files matching the extensions .ts, .tsx, .js, or .jsx using git diff --cached and stores them in CHANGED_FILES. If there are changed files, the script informs the user and defines a REGEX pattern that identifies function declarations, arrow functions, and class methods in TypeScript or JavaScript files. It then iterates over each file in CHANGED_FILES, checking that the file exists under the repository root. STAGED_ADDITIONS and STAGED_REMOVALS capture the added and removed lines of the staged diff by filtering lines starting with + or -, and the REGEX is applied to each of them to extract any function signatures they contain. If signature changes are detected, compare_strings is called to highlight the differences between the removed and added signatures, and the POTENTIALLY_BREAKABLE_CHANGES flag is set to 1 (true).

4. Check for Version Updates

```shell
if [ "$POTENTIALLY_BREAKABLE_CHANGES" -eq 1 ]; then
  printf "Checking to make sure package version was updated...\n"

  if [ "$ABORT_IF_ANY_VERSION_WAS_NOT_UPDATED" -eq 1 ]; then
    VERSION_CHANGED=$(git diff -G '"version":' --cached package.json | wc -l)
    if [ "$VERSION_CHANGED" -gt "0" ]; then
      printf "${green}Version was updated! Continuing...${no_color}\n"
    else
      printf "${red}Version was not updated :( Aborting commit.${no_color}\n"
      exit 1
    fi
  elif [ "$ABORT_IF_MAJOR_VERSION_WAS_NOT_UPDATED" -eq 1 ]; then
    CURRENT_VERSION=$(grep -oP '"version":\s*"\K[0-9]+\.[0-9]+\.[0-9]+"' package.json | tr -d '"')
    CURRENT_MAJOR=$(echo "$CURRENT_VERSION" | cut -d'.' -f1)
    STAGED_VERSION=$(git diff --cached package.json | grep -oP '"version":\s*"\K[0-9]+\.[0-9]+\.[0-9]+"' | tr -d '"')
    STAGED_MAJOR=$(echo "$STAGED_VERSION" | cut -d'.' -f1)
    if [ -n "$STAGED_MAJOR" ]; then
      # Check if the MAJOR version has changed
      if [ "$CURRENT_MAJOR" != "$STAGED_MAJOR" ]; then
        printf "${green}MAJOR version was updated! Continuing...${no_color}\n"
      else
        printf "${red}MAJOR version was not updated :( Aborting commit.${no_color}\n"
        exit 1
      fi
    else
      printf "${red}MAJOR version was not updated :( Aborting commit.${no_color}\n"
      exit 1
    fi
  fi
fi
```

If potentially breaking changes are detected, the script consults the two flags to decide whether to abort the process. If ABORT_IF_ANY_VERSION_WAS_NOT_UPDATED is set to true, it checks whether the package.json version was updated at all; if not, the commit is aborted with a message. If ABORT_IF_MAJOR_VERSION_WAS_NOT_UPDATED is set to true, it extracts CURRENT_VERSION from the package.json file using grep: the regular expression captures the version (e.g., 1.2.3), and tr -d '"' removes the surrounding double quotes. It then extracts the CURRENT_MAJOR part (e.g., 1 from 1.2.3) using cut -d'.' -f1, which splits the version on the dots and selects the first field. The same is done for STAGED_VERSION and STAGED_MAJOR, but using git diff --cached to capture the staged changes to the package.json file. Finally, it compares the current and staged major versions. If they differ, the MAJOR version was updated correctly, so the script prints a success message and continues; if they are the same, it prints an error message and aborts the commit with exit 1.

Using our Hook

To see the hook in action, change some random function's signature, stage the changes with git add --all, and try to commit them with git commit -m 'my hook test'. The hook will execute, print the detected signature changes with the additions and removals highlighted, and then abort with an error message. And that's it!
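As a quick sanity check outside the hook, the version-extraction pipeline from step 4 can be run on its own. A minimal sketch, assuming GNU grep (for the -P flag); the file path and contents below are throwaway demo values:

```shell
#!/usr/bin/env bash
# Throwaway package.json, created only for this demo
cat > /tmp/demo-package.json <<'EOF'
{
  "name": "demo",
  "version": "2.5.1"
}
EOF

# Same extraction the hook performs: \K discards the already-matched prefix,
# so only the version (plus its closing quote) is kept; tr strips the quote
VERSION=$(grep -oP '"version":\s*"\K[0-9]+\.[0-9]+\.[0-9]+"' /tmp/demo-package.json | tr -d '"')
MAJOR=$(echo "$VERSION" | cut -d'.' -f1)

echo "$VERSION"  # prints 2.5.1
echo "$MAJOR"    # prints 2
```

The same cut invocation with -f2 or -f3 would yield the MINOR and PATCH parts, should you ever extend the hook beyond MAJOR-only checks.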
The commit process will be aborted, and you will not be able to commit your changes until you increment the MAJOR version in your package.json file. Once you do, the hook will let the commit go through.

Improving the script

As much as this script can help you manage semantic versioning, it is still quite rudimentary, and there is room for several improvements. Some possible enhancements include:

  • Analyzing each flagged function more deeply to determine whether it directly interacts with consumers of the code, increasing the confidence that changing its signature really constitutes a breaking change.
  • Looking for changes in API route addresses.
  • Looking for changes in the contract of input and output types.

If you liked this script and would like to help improve it, feel free to make modifications. And, if you'd like, send me your changes so I can also benefit from your improvements 😂 For now, the script is hosted in a Gist, but I could create a repository where you can submit a PR, allowing us to track contributions from collaborators.

Conclusion

Integrating Husky with Git hooks allows us to automate the detection of potentially breaking changes in code and to enforce semantic versioning best practices. By using scripts like the one demonstrated here, we can identify updates to function signatures and ensure that the MAJOR version is incremented when necessary, reducing the risk of introducing breaking changes without proper versioning. This approach helps maintain a clean, organized versioning system while minimizing human error, allowing teams to collaborate more effectively and push code changes safely without compromising software stability.

Further Reading

Don't forget to check out my article on Semantic Versioning. And if you're curious about how to showcase your GitHub repositories on LinkedIn, this one's for you!

If you enjoyed this article, please leave that naughty like, and if you have questions, drop them in the comments 🙌

Recent projects

See all projects
  • kitsune-server
  • kitsune
  • github-medium-rss
    Show your recently published articles from Medium on your GitHub README.

Let’s Connect

If you want to get in touch with me about something or just to say hi, reach out on social media or send me an email.

  • X (formerly Twitter)
  • GitHub
  • LinkedIn
  • brunosartori.dev@gmail.com
© 2024 • Bruno Sartori