The ultimate developer guide in 2025 💻
October 2025
This blog is also available in Spanish: La guía definitiva para desarrolladores en 2025 💻
If you know my writing style, you know I don't like fluff. I neither want to waste your time nor provide you with low-value content. No click-bait titles - if I name this blog "The ultimate developer guide in 2025", it's because I genuinely believe it is the ultimate guide.
Who is this guide for?
- The beginner looking to jump into the tech market
- The junior who feels stuck, not knowing what to learn or do next
- The senior who is looking to refresh their skills
- Literally anyone who was lucky enough to come across this blog 😅
Intro
In this guide, I'll talk about the meta, soft, and hard skills I believe you should have as a developer. Lots of skills are transferable across domains, both tech and non-tech related, though the hard skills focus mostly on web development (both backend and frontend). For the sake of conciseness, I have decided to split very long sections into separate blogs, so you get to choose whether you need or want to read more on any of them.
Meta skills
- English: You already know this. It is no secret that the latest and best information is almost always in English, especially in tech, from documentation to interacting with co-workers. I suggest aiming for C1, or B2 as a minimum. For preparation, there are tons of YouTube videos, blogs, and even online platforms where you can get all the knowledge you need almost for free; combined with tons of practice, this will help you reach at least B2 in about 6 months. Then, try getting a certificate that is somewhat recognized, for example EFSET, which is totally free and is actually a very good proxy for your English level.
- How to learn effectively: In tech, not only do you have to learn a lot of stuff, but you also have to keep up with the changes, and at an incredible pace at that. So it makes sense that you should train "the skill to learn" itself. For this, I highly suggest you read Scott Young's book Ultralearning. I have zero affiliation with him, and I genuinely consider it a game changer for how productive I became at learning new skills and maintaining current ones.
- Searching effectively: Whenever you have a bug, or you're trying to learn some topic, or you're literally searching for anything online, you may be the type of person who goes straight to GPT and fires off a big (often suboptimal and not-so-clear) prompt. Or you may google your way through it. Either way, you should know how to search effectively, since that will take up a considerable amount of your time as a developer and 21st-century person.
- Using your text editor effectively: Development is not only coding. It's mostly thinking and figuring stuff out. But when you DO code, you should be able to do so super quickly, without even having to think about it, to the point that it becomes second nature to you. Become confident in your ability to code FAST. This requires learning some shortcuts and raw practice. Don't go to the extreme of learning 200 shortcuts (ESPECIALLY YOU, VIM USERS 🥸), just to realize that all that time, effort, and mental space could have been better allocated to other stuff.
Soft skills
- Effective communication: As a developer and as a human in general, you should be able to communicate effectively, with both tech and non-tech people, in multiple languages (if required). We as devs tend to disregard the importance of communication, making lame excuses like 'hard skills are more important', or 'it's better to be as concise, logical and objective as possible, always'. Even I thought like that at one point, I have to admit 🥲. The thing is that before being a developer, you are a human. This means you should not be a jerk and should be nice. Be kind to others, help them from time to time without expecting anything back, and learn how to interact socially, to properly understand others and make yourself understood. If you want to improve your communication skills, I suggest the following books:
- Open-mindedness: Never think you know everything, even in something you are extremely proficient in. Always accept other perspectives, be open and willing to view and do stuff differently.
- Mental strength, discipline, and work ethic: If you are somewhat as passionate as me about tech and developing stuff, motivation and passion will help you a lot. However, you won't always feel like sitting down, putting in the long hours, and grinding it out. When such times happen, you cannot rely on your passion to keep you on track - only discipline and a strong work ethic will. It's about making a statement to yourself: when things become tough, do you want to be the one who gives up? Or do you want to achieve your goals and be responsible for your life? My suggestion for such times: do not think about it, JUST DO IT. Your future self will thank you for it, and deep down you will feel better, knowing that you did what you had to do and didn't give up. If you want to read more on this, I recommend the following:
Hard skills
- Problem solving: This will be the most important skill you need as a developer. Especially with the rise of AI, roles that relied heavily on someone 'just coding' have naturally started to shift towards engineers who can solve the given problem and, once they have a strong understanding of how to proceed, use an LLM (with good prompting and models) to implement most, if not all, of the solution.
- Basic Computer Architecture: Have a broad understanding of what a computer is made of and how it uses its resources. A very quick overview of the CPU, RAM, disk, GPU, peripherals, and PSU will suffice for now.
- Strong knowledge of programming languages: Become competent in at least two programming languages. I recommend one high-level, dynamically typed language (JavaScript, Python) and one statically typed, compiled language (Go, Rust, C#, Java). Understand the ins and outs of the language; using it should become second nature to you. Then take a look at the basics of other programming languages, just to get a feel for how stuff is done in them. For example, if your 'competent' programming language is JavaScript, then take a look at how the same thing can be done in Python or Java. This will make you think more in terms of 'solving the problem' rather than 'trying to use technology X to solve the problem'.
- Data Structures and Algorithms: This might be a bit controversial, since some believe that DSA has nothing to do with what you will probably be doing in the 'real world'. And yes, it is true that with very little DSA knowledge you can get around in tech (mostly in frontend roles). However, I am 100% sure that DSA is a must to become an effective problem solver in programming. Start from the basics (Big O Notation, Arrays, Linked Lists, Dynamic Arrays, Hashmaps, Sorting Algorithms, Binary Search) and work your way up to more advanced topics (Recursion, Trees, Graphs, Dynamic Programming). During university, I was super passionate about Competitive Programming, which required tons of DSA, and from experience I can tell you that it improved my problem-solving skills tenfold, not only in programming, but in tech in general.
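To make the basics concrete, here's a minimal JavaScript sketch of one of the fundamentals from that list - an iterative binary search over a sorted array:

```javascript
// Iterative binary search over a sorted array: O(log n) time, O(1) space.
// Returns the index of `target`, or -1 if it is not present.
function binarySearch(sorted, target) {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1; // target can only be in the right half
    else hi = mid - 1;                      // target can only be in the left half
  }
  return -1;
}

console.log(binarySearch([1, 3, 5, 8, 13, 21], 8));  // 3
console.log(binarySearch([1, 3, 5, 8, 13, 21], 10)); // -1
```

The point isn't the fifteen lines of code - it's internalizing why this runs in O(log n) instead of scanning the whole array.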
- Networking: Learn how computers can actually talk to each other. Learn about the OSI Model (7 layers), network protocols (IP, TCP, UDP, HTTP), and the typical client-server model. Later on, dive deeper into HTTP, SOAP, gRPC, GraphQL, WebSockets, and WebRTC.
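To see what sits one layer below HTTP, here's a minimal sketch using Node's built-in net module - a plain TCP echo server (the port number is just an arbitrary choice for the example):

```javascript
const net = require("net");

// A plain TCP server: this is roughly what sits underneath every HTTP server.
const server = net.createServer((socket) => {
  console.log("client connected:", socket.remoteAddress);
  socket.on("data", (chunk) => {
    // Echo whatever bytes the client sent straight back to it.
    socket.write(chunk);
  });
  socket.on("end", () => console.log("client disconnected"));
});

server.listen(4000, () => console.log("TCP echo server listening on port 4000"));
// Try it with: nc localhost 4000
```

Watching raw bytes go back and forth like this makes the higher-level protocols (HTTP, WebSockets, gRPC) much less mysterious.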
- Concurrency & parallelism: Learn about the theory of concurrency vs parallelism, multi-processing and multi-threading. Most importantly, learn how to implement these concepts in your programming language. Also, know which tasks your programming language(s) are best suited for. For example, Go excels natively at both parallelism and concurrency due to lightweight goroutines and the goroutine scheduler, especially for CPU-bound workloads; whereas NodeJS handles concurrency very well, especially for IO-bound workloads (like in the typical client-server model, where a UI makes requests to a server, and this server talks to a database and maybe a cache and then responds back to the client), and it is capable of parallelism with worker threads, but it just isn't the best choice when highly parallel or CPU-bound workloads are required.
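As a small illustration of the Node side of that comparison, here's a sketch using the built-in worker_threads module to push a CPU-bound loop off the main thread while the event loop stays responsive (the workload and numbers are made up for the example):

```javascript
// CPU-bound work on a worker thread, so the event loop stays free for I/O.
// Single-file pattern: the same script runs as the main thread and as the worker.
const { Worker, isMainThread, parentPort, workerData } = require("worker_threads");

function heavySum(n) {
  let total = 0;
  for (let i = 0; i < n; i++) total += i; // deliberately CPU-bound
  return total;
}

if (isMainThread) {
  const worker = new Worker(__filename, { workerData: 1e9 });
  worker.on("message", (result) => console.log("sum from worker:", result));
  // Meanwhile the main thread keeps handling timers / I/O without blocking.
  setInterval(() => console.log("main thread is still responsive"), 200).unref();
} else {
  parentPort.postMessage(heavySum(workerData));
}
```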
- Git and GitHub: Make yourself comfortable with version control. Learn the most important commands and how to leverage them to 'git around' properly (pun intended): add, commit, clone, fetch, pull, push, merge, stash, checkout, restore, remote, branching best practices, how to contribute to a project (either private or Open Source), how to do Pull Requests, etc.
- Linux and bash scripting: Most servers and development workflows run on Linux, period. The main reasons for it:
- Sheer performance and resource-usage efficiency compared to Windows, due to its lightweight architecture.
- It's more cost-effective from a DevOps perspective, since it's open-source, meaning there aren't any licensing bills to pay.
- Much better security and privacy: its open-source nature allows for quick vulnerability fixes, and telemetry data collection is minimal (many distributions don't collect your data at all).
Bottom line: DO NOT USE Windows (not even with WSL - unless you absolutely need it), and use Linux like a real developer 🤓
Backend specific
- Servers: With your preferred programming language, learn how to implement a simple HTTP server that returns JSON. Here, you should dive deep into routing, handling errors gracefully, middlewares, and serving content from the server (either serving from the backend depending on the route via HTTP, or creating an FTP server). Then try to implement your own mini HTTP library at least once - one that takes care of creating the server (which is a TCP listener) and handling multiple requests concurrently and efficiently. This will give you a bit more clarity on what HTTP libraries actually do for you, rather than blindly relying on them without knowing what they do under the hood. Next, implement each of the protocols / technologies mentioned in the networking section at least once - SOAP, gRPC, GraphQL, WebSockets, WebRTC, and additionally tRPC (in case your programming language is TypeScript) - so that you get an appreciation of what each of them is, when to use it, and a mental notion of how to implement it.
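Here's a minimal sketch of that first step - an HTTP server returning JSON with hand-rolled routing, using only Node's built-in http module (routes and port are arbitrary examples):

```javascript
const http = require("http");

// A tiny HTTP server with hand-rolled routing and JSON responses --
// roughly what Express/Fastify and friends do for you under the hood.
const server = http.createServer((req, res) => {
  const sendJson = (status, body) => {
    res.writeHead(status, { "Content-Type": "application/json" });
    res.end(JSON.stringify(body));
  };

  if (req.method === "GET" && req.url === "/health") {
    return sendJson(200, { status: "ok" });
  }
  if (req.method === "GET" && req.url === "/users") {
    return sendJson(200, { users: [{ id: 1, name: "Ada" }] });
  }
  // Fallback: handle unknown routes gracefully instead of crashing.
  sendJson(404, { error: "not found" });
});

server.listen(3000, () => console.log("listening on http://localhost:3000"));
```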
- Databases: Learn the theory of RDBMS, a brief comparison of SQL vs NoSQL databases, then learn the basics of SQL queries and concepts (queries, indexes, backups, horizontal and vertical scaling of the DB). Start with either Postgres or MySQL, and then learn about NoSQL DBs (MongoDB, DynamoDB). I mean not only the theory and using their GUI or CLI, but also how to use them from your programming language with the corresponding drivers and SDKs.
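As a small sketch of the "use it from your programming language" part, here's what a parameterized query could look like in Node, assuming the pg package, a local Postgres instance, and a hypothetical users table (the connection string is a placeholder):

```javascript
const { Pool } = require("pg"); // assumes `npm install pg` and a running Postgres

// Placeholder connection string -- adjust user/password/db to your own setup.
const pool = new Pool({ connectionString: "postgres://postgres:postgres@localhost:5432/mydb" });

async function findUserByEmail(email) {
  // Parameterized query: $1 is safely escaped, which prevents SQL injection.
  const result = await pool.query("SELECT id, name, email FROM users WHERE email = $1", [email]);
  return result.rows[0] ?? null;
}

findUserByEmail("ada@example.com")
  .then((user) => console.log(user))
  .catch((err) => console.error("query failed:", err))
  .finally(() => pool.end());
```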
- Docker basics: No advanced stuff for now - just learn how to run containerized applications, either local apps of yours or public images. In the backend you will be running third-party services all the time (database, cache, web server, message brokers, etc.), so instead of cluttering your OS, keep it all clean. You may go deeper into Docker, especially if you are interested in DevOps (deploying containerized applications), but that comes later. Learn key commands like build, run, stop, start, ps, pull, how to create and manage images, how to create and manage containers, and you will be good to go. If you find yourself having more than one containerized app in your project, learn how to set up docker-compose.
- Design patterns: There are a lot of them, and sometimes they become too academic or theoretical. So, from my experience and knowledge, these are the most important ones you should know about:
Architectural Patterns:
- MVC (Model View Controller): Allows for a clear separation of concerns, where the Model represents the domain data and business logic, the Controller handles requests and coordinates between Model and View, and the View is what is returned to the user (HTML or JSON). Often, a separate Data Access layer (Repository pattern) sits between the Model and the database.
- Repository - Service: Similar separation of concerns - the Repository abstracts data access (interacts with the DB), while the Service contains the business logic. This pattern is commonly used alongside MVC to further separate data access from business logic.
Structural Patterns:
- Adapter: Integrates incompatible interfaces by wrapping one class to match the interface expected by another, allowing them to work together without modifying their source code.
- Decorator: Extends behavior dynamically by wrapping objects with additional functionality, providing a flexible alternative to inheritance for adding features at runtime.
- Facade: Simplifies complex subsystems by providing a single, unified interface that hides the complexity of multiple classes or modules behind it.
Behavioral Patterns:
- Observer: Enables event-driven communication through pub/sub mechanism, where objects subscribe to events and get notified when state changes occur, promoting loose coupling.
- Strategy: Allows interchangeable algorithms by encapsulating them in separate classes, letting the algorithm vary independently from the code that uses it.
- Middleware: Creates a request processing pipeline where each middleware component can process, modify, or pass along requests, commonly used in web frameworks for cross-cutting concerns like authentication, logging, and error handling.
Creational Patterns:
- Factory: Handles object creation by providing a method that returns instances of classes based on input parameters, abstracting the instantiation process and making code more flexible.
- Singleton: Ensures a class has only one instance and provides global access to it. Use sparingly - often a code smell that indicates tight coupling. Consider dependency injection or stateless design instead.
In my opinion, all of these can become quite technical and feel like a lot of mumbo jumbo (and in truth, most times they are), but it's industry standard, so you should be familiar with them. I personally like to keep everything as simple as possible, and if I foresee my own headache trying to refactor in the future once the code begins to grow, then I split the code clearly (most times using MVC) without wasting too much time trying to find 'the ultimate duper super best pattern to apply', since chances are that you will end up spending more time on thinking about 'how should you code the solution' rather than actually 'solving the problem'. Solve first, optimize later if necessary.
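To show how little ceremony these patterns can require in practice, here's a minimal JavaScript sketch of the Strategy pattern from the list above, using a made-up discount example:

```javascript
// Strategy pattern: interchangeable algorithms behind one interface.
// Hypothetical example -- different discount strategies for an order total.
const strategies = {
  none: (total) => total,
  blackFriday: (total) => total * 0.7,               // flat 30% off
  loyalCustomer: (total) => Math.max(total - 15, 0), // fixed amount off, never negative
};

function checkout(total, strategyName) {
  const applyDiscount = strategies[strategyName] ?? strategies.none;
  // The caller never cares *how* the discount is computed, only that it is.
  return applyDiscount(total);
}

console.log(checkout(100, "blackFriday"));   // 70
console.log(checkout(100, "loyalCustomer")); // 85
console.log(checkout(100, "unknown"));       // 100 (falls back to no discount)
```

Adding a new discount rule means adding one entry to the map; the checkout code never changes, which is the whole point of the pattern.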
- Testing: Say that you built an app (or at least a piece of functionality). Is it now ready to be deployed? Of course not. Until you have tested it and made sure that it works as it is expected to work, in as many scenarios as logically possible, you have no guarantee that you built a robust solution. Don't get me wrong, you will almost never create a 100% failure-proof solution, but you can always make your code more robust by doing proper tests (Unit, Integration, and End to End), and if any fail, pinpointing and debugging the issue.
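Here's a minimal unit-test sketch using Node's built-in test runner (available since Node 18); the function under test is hypothetical - the point is the shape of a test:

```javascript
// Run with: node --test
const test = require("node:test");
const assert = require("node:assert/strict");

// Hypothetical unit under test: price calculation with an optional discount.
function totalPrice(items, discount = 0) {
  if (discount < 0 || discount > 1) throw new RangeError("discount must be between 0 and 1");
  const subtotal = items.reduce((sum, item) => sum + item.price * item.qty, 0);
  return subtotal * (1 - discount);
}

test("sums item prices and applies the discount", () => {
  const items = [{ price: 10, qty: 2 }, { price: 5, qty: 1 }];
  assert.equal(totalPrice(items, 0.5), 12.5);
});

test("rejects discounts outside the valid range", () => {
  assert.throws(() => totalPrice([], 1.5), RangeError);
});
```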
- Caching: Great, now your app has scaled to a few thousand users and the analytics team realizes that 35% of users are quitting your site because one frequent database query takes too long to respond. What can you do? You cache. Reading from RAM (a cache like Redis or Memcached) is orders of magnitude faster than reading from disk (a traditional RDBMS like MySQL, or even a NoSQL database like Mongo).
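A common way to apply this is the cache-aside pattern; here's a sketch assuming the redis npm package (v4 API), a local Redis instance, and a stand-in for the slow database query:

```javascript
const { createClient } = require("redis"); // assumes `npm install redis` and a local Redis

const redis = createClient({ url: "redis://localhost:6379" });

// Stand-in for the slow, frequent database query mentioned above.
async function fetchProductFromDb(id) {
  return { id, name: "Mechanical keyboard", priceCents: 9900 };
}

// Cache-aside: try the cache first, fall back to the DB, then populate the cache.
async function getProduct(id) {
  const key = `product:${id}`;
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached); // cache hit: served from RAM

  const product = await fetchProductFromDb(id); // cache miss: hit the DB once
  await redis.set(key, JSON.stringify(product), { EX: 60 }); // expire after 60 seconds
  return product;
}

async function main() {
  await redis.connect();
  console.log(await getProduct(42)); // miss -> DB -> cached
  console.log(await getProduct(42)); // hit -> Redis
  await redis.quit();
}

main().catch(console.error);
```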
- Basic DevOps, CI/CD: Even if it's not your role, you should have an understanding of DevOps. Learn how to deploy:
- With Git (using providers like Vercel), where on every push to some branch (usually main), the provider automatically runs a workflow: it fetches the code, runs some commands (typically a build pipeline), and if everything is good to go, deploys the new version.
- On a VPS (with providers like DigitalOcean, Linode, or Vultr). This will teach you a lot about interacting with the OS and managing updates, rollbacks, and failures yourself. Despite not being that convenient, I strongly suggest you go through that process at least once - it will make a BIG difference.
- With a cloud provider (AWS, GCP, Cloudflare, Azure). I suggest you try at least once deploying an application to an AWS free EC2 instance, and then do the same but with the equivalent containerized application to ECS.
- System Design: Become confident in designing (and if time allows, implementing) all sorts of software systems. Start with the core system design principles (Scalability, Reliability, Performance, and Maintainability & Observability). I suggest you brush up a bit on the following:
- API Design: RESTful and GraphQL APIs; efficient resource structuring and versioning.
- Database Design: Master SQL vs NoSQL, data modeling, indexing, replication, partitioning, and migrations.
- Caching Strategies: Learn why/when to use cache, tools like Redis/Memcached, cache invalidation, and performance trade-offs.
- Load Balancing & Servers: Concepts of distributing requests, high availability, horizontal scaling, statelessness.
- Microservices & Event-Driven Architecture: Design loosely coupled, scalable services; message queues (Kafka, RabbitMQ).
- Security: Auth (OAuth2, JWT, RBAC), encryption in transit and at rest, secure API design, vulnerability management.
- DevOps & CI/CD: Infrastructure as Code (Terraform, Kubernetes), pipeline automation, deployment, rollback, blue-green deploys, canary releases.
- Monitoring & Fault Tolerance: Health checks, distributed tracing, alerting (Prometheus, Grafana, ELK Stack), disaster recovery, backups.
To learn System Design, I strongly suggest Educative.io's Grokking the System Design Interview. Again, I have no affiliation with them. At the time of this writing, educative.io costs 13 bucks a month (and I am sure you can finish the course within a month). Or if you just can't afford it, there are some good alternatives on YouTube. Or if you are ok with being a pirate 🏴☠️, then here's Educative.io – Deep Dive into System Design Interview for free.
- Web servers: Your application server (Express, FastAPI, etc.) handles the business logic, but you need a web server sitting in front of it to handle HTTP requests, serve static files, terminate SSL/TLS, load balance, and reverse proxy. The reason is that web servers are optimized for exactly these tasks, whereas your application server might fail to handle all of them effectively once you reach a certain scale (a few thousand concurrent users). Here are the three you should know about:
- nginx: This is the one you'll use most of the time. It's lightweight, fast, and handles high concurrency like a champ. Most production deployments use nginx as a reverse proxy in front of their application servers. It excels at serving static content, handling SSL termination, and load balancing across multiple backend instances. The configuration syntax is clean and straightforward (once you get the hang of it), and it consumes minimal resources. Learn the basics: how to set up a reverse proxy, serve static files, configure SSL certificates, set up basic load balancing, and handle common scenarios like redirects and rate limiting.
- Apache HTTP Server: The old-school workhorse that's been around forever. While nginx is generally preferred for modern deployments due to its better performance under high concurrency, Apache is still widely used, especially in shared hosting environments and legacy systems. It's more feature-rich out of the box (mod_rewrite, .htaccess files), but it's also heavier and doesn't handle concurrent connections as efficiently as nginx. You should know about it because you'll encounter it in many existing systems, and understanding how to configure it is still valuable.
- Caddy: The modern, developer-friendly option that automatically handles HTTPS certificates via Let's Encrypt. If you're setting up a personal project or small service and don't want to deal with SSL certificate management, Caddy is your friend. It's written in Go, has a simple configuration format, and automatically provisions and renews SSL certificates. It's great for smaller deployments where you want to get up and running quickly without the overhead of managing certificates manually. However, for production systems at scale, nginx is still the safer bet due to its battle-tested performance and extensive ecosystem.
- Message Queues: When building distributed systems, you'll often need to decouple services and handle asynchronous communication. That's where message queues come in. They allow services to send and receive messages without being directly connected, enabling better scalability, reliability, and fault tolerance. Here are the main ones you should know about:
- Apache Kafka: The heavyweight champion for event streaming and high-throughput message processing. It's designed to handle millions of messages per second and is perfect for building event-driven architectures, log aggregation, and real-time data pipelines. Kafka uses a distributed commit log architecture, which means messages are persisted and can be replayed, making it excellent for scenarios where you need durability and the ability to process historical data. It's overkill for simple use cases, but if you're building microservices at scale or need real-time analytics, Kafka is your go-to choice.
- RabbitMQ: The reliable, battle-tested message broker that's been around for ages. It supports multiple messaging protocols (AMQP, MQTT, STOMP) and provides features like message acknowledgments, routing, and dead letter queues out of the box. RabbitMQ is great for traditional message queuing scenarios where you need guaranteed delivery, routing flexibility, and complex message patterns. It's easier to set up and understand than Kafka, but doesn't handle the same scale of throughput. Perfect for most microservices architectures where you need reliable message delivery.
- Redis (Pub/Sub and Streams): You already know Redis as a cache, but it can also function as a message queue. Redis Pub/Sub is lightweight and perfect for simple pub/sub patterns where you don't need message persistence (if a subscriber is offline, they miss the message). Redis Streams, on the other hand, provides message persistence and consumer groups, making it suitable for more complex scenarios. It's not as feature-rich as dedicated message brokers, but if you're already using Redis in your stack, it's a convenient option for simpler messaging needs (see the sketch after this list).
- AWS SQS: The managed message queue service from AWS. If you're already in the AWS ecosystem, SQS is the obvious choice. It's fully managed (no infrastructure to maintain), scales automatically, and integrates seamlessly with other AWS services. It supports both standard queues (high throughput, at-least-once delivery) and FIFO queues (exactly-once processing, ordered delivery). The main downside is vendor lock-in - once you're on SQS, migrating away is a pain. But for AWS-native applications, it's hard to beat.
Understanding when to use each one is crucial. For most applications starting out, RabbitMQ strikes the right balance between features and complexity. As you scale and need event streaming, Kafka becomes essential. Redis is great for simple pub/sub or when you want to leverage your existing Redis infrastructure. And SQS is perfect if you're all-in on AWS.
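As promised above, here's a minimal sketch of Redis Pub/Sub in Node - fire-and-forget messaging with no persistence - assuming the redis npm package and a local Redis instance (channel name and payload are made up):

```javascript
const { createClient } = require("redis"); // assumes `npm install redis` and a local Redis

async function main() {
  const publisher = createClient({ url: "redis://localhost:6379" });
  // Pub/Sub requires a dedicated connection for subscribing.
  const subscriber = publisher.duplicate();
  await publisher.connect();
  await subscriber.connect();

  // Subscriber: react to events as they arrive (no persistence -- if you're offline, you miss them).
  await subscriber.subscribe("orders.created", (message) => {
    const order = JSON.parse(message);
    console.log("send confirmation email for order", order.id);
  });

  // Publisher: another service emits an event without knowing who is listening.
  await publisher.publish("orders.created", JSON.stringify({ id: 1234, totalCents: 4999 }));

  // Give the subscriber a moment to receive, then clean up.
  setTimeout(async () => {
    await subscriber.quit();
    await publisher.quit();
  }, 500);
}

main().catch(console.error);
```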
- Search engines: When your application needs to provide fast, relevant search functionality - whether it's product search, full-text search across documents, log analysis, or building search-as-a-service features - a regular database query just won't cut it. Search engines are purpose-built for indexing, searching, and ranking large volumes of text data efficiently. They're essential for applications where search is a core feature, not just a nice-to-have. Here are the main ones you should know about:
- Elasticsearch: The modern, distributed search engine that's become the de facto standard for search and analytics. It's built on top of Apache Lucene and shines when you need real-time search, full-text search, and analytics at scale. Elasticsearch is perfect for log aggregation (part of the ELK stack - Elasticsearch, Logstash, Kibana), product search in e-commerce, content search, and even as a NoSQL database for certain use cases. It's horizontally scalable, handles JSON documents natively, and provides a RESTful API that's easy to integrate. The main downside is complexity - it requires careful configuration and monitoring, especially at scale. It's overkill for simple search needs, but if you're building search functionality that needs to scale and perform well, Elasticsearch is your go-to choice (see the sketch below).
- Apache Solr: The battle-tested, enterprise-grade search platform that's been around longer than Elasticsearch. It's also built on Apache Lucene and shares many capabilities with Elasticsearch, but with a different architecture and philosophy. Solr is more XML/configuration-driven, while Elasticsearch is more JSON/API-driven. Solr excels in enterprise environments where you need advanced features like faceted search, spell checking, and complex query parsing out of the box. It's more mature in some areas and has better support for certain document formats (like PDF, Word, etc.) through Solr Cell (Tika). However, Elasticsearch generally has better performance and is easier to get started with for modern applications. Solr is still widely used in enterprise settings and is a solid choice if you're already in the Apache ecosystem or need specific Solr features.
Both Elasticsearch and Solr are powerful, but Elasticsearch has gained more traction in recent years due to its ease of use, better performance, and stronger ecosystem. For most modern applications starting out, Elasticsearch is the safer bet. Solr is still valuable to know, especially if you're working with legacy systems or need specific enterprise features.
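To make the Elasticsearch side concrete, here's a sketch that hits its REST search API directly with Node's global fetch (Node 18+), assuming a local, unauthenticated Elasticsearch node and a hypothetical products index that has already been populated:

```javascript
// Node 18+ has a global fetch. Assumes Elasticsearch running locally (no auth)
// and a hypothetical "products" index with documents like { name: "..." }.
async function searchProducts(text) {
  const response = await fetch("http://localhost:9200/products/_search", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: { match: { name: text } }, // full-text match on the "name" field
      size: 5,                          // return at most 5 hits
    }),
  });
  const data = await response.json();
  // Each hit carries a relevance score and the original document under _source.
  return data.hits.hits.map((hit) => ({ score: hit._score, ...hit._source }));
}

searchProducts("running shoes").then(console.log).catch(console.error);
```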
- Logging: Most people start by just logging `error with something: ${ERR}`, but once your app's complexity increases, getting general information about the behavior of the app or debugging from the logs becomes impossible. You'll end up drowning in a sea of unstructured, unsearchable logs with no way to trace what happened when. Proper logging is crucial for debugging, monitoring, and understanding your application's behavior in production. Here's how to approach it at different levels:
First step - Use a proper logging library: Ditch the console.log and use a proper logging library for your programming language. Good logging libraries (like Winston for Node.js, Loguru for Python, or Logback for Java) provide:
- Log levels (DEBUG, INFO, WARN, ERROR, FATAL) so you can filter what matters
- Timestamps automatically added to every log entry
- Structured logging with JSON support, making logs searchable and parseable
- Context and metadata (request IDs, user IDs, etc.) that help you trace issues across services
- Configurable output formats
Instead of printing to stdout or stderr synchronously, send logs asynchronously to a file (or log aggregation service) so you don't block your main thread just writing logs. This is especially important in high-throughput applications where logging can become a bottleneck.
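Here's a minimal sketch of what that looks like with Winston in Node (assuming `npm install winston`; the file name and metadata fields are arbitrary examples):

```javascript
const winston = require("winston"); // assumes `npm install winston`

// Structured JSON logs with levels and timestamps, written to a file
// instead of blocking on plain console output.
const logger = winston.createLogger({
  level: "info", // DEBUG entries are dropped unless you lower this
  format: winston.format.combine(winston.format.timestamp(), winston.format.json()),
  transports: [
    new winston.transports.File({ filename: "app.log" }),
    new winston.transports.Console(), // keep console output during development
  ],
});

// Metadata (request id, user id) is what makes these entries searchable later.
logger.info("order created", { requestId: "req-123", userId: 42, totalCents: 4999 });
logger.error("payment failed", { requestId: "req-124", reason: "card declined" });
```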
Next step - Log analysis (for bigger projects): Once you have structured logs, you need a way to analyze them. This is where log aggregation and analysis tools come in:
- ELK Stack (Elasticsearch, Logstash, Kibana): The most popular open-source solution for log analysis. Elasticsearch stores and indexes your logs, Logstash processes and enriches them, and Kibana provides the visualization. It's powerful and free, but requires significant setup and maintenance. Perfect for teams that need full control and have the resources to manage it. It's overkill for small projects, but essential for production systems at scale.
- Loki + Grafana: A more lightweight alternative to ELK. Loki is designed specifically for log aggregation and is more resource-efficient than Elasticsearch. It integrates seamlessly with Grafana (which you'll likely already be using for metrics), making it a great choice if you want logs and metrics in one place. It's easier to set up than ELK and strikes a good balance between features and complexity.
- Splunk: The enterprise-grade commercial solution. It's feature-rich, has excellent support, and handles massive scale. But it's expensive and probably overkill unless you're working at a large enterprise with complex compliance requirements.
- Cloud-native solutions: AWS CloudWatch Logs, Google Cloud Logging, or Azure Monitor Logs. If you're already in a cloud ecosystem, these are the obvious choices. They're fully managed, integrate with other cloud services, and scale automatically. The main downside is vendor lock-in and cost at scale.
Next step - Streaming logs (for distributed systems at scale): When you need real-time, distributed, and highly performant log analysis, you move from batch processing to streaming. Instead of writing logs to files and then processing them, you stream logs directly to a processing pipeline:
- AWS Kinesis Data Streams: If you're on AWS, Kinesis is the go-to for streaming logs. It can handle millions of events per second, integrates with other AWS services (Lambda, Firehose, Elasticsearch), and provides real-time processing. Perfect for AWS-native applications that need real-time log analysis and alerting.
- Elasticsearch with Beats: Part of the Elastic stack, Beats (like Filebeat) can stream logs directly to Elasticsearch in real-time. This gives you the power of Elasticsearch for log analysis with real-time streaming capabilities. It's great if you're already using the ELK stack and need real-time processing.
- Kafka + Log Aggregation: For the most demanding scenarios, you can stream logs through Kafka (which you learned about in the Message Queues section) and then consume them for analysis. This gives you the highest throughput and most flexible processing, but adds significant complexity.
The progression is clear: start with a proper logging library, add log analysis when your project grows, and move to streaming only when you need real-time distributed log processing at massive scale. For most applications, a good logging library + ELK stack or Loki + Grafana is more than enough. Streaming is only necessary when you're dealing with millions of events per second across distributed systems. And on smaller scale apps (some internal tool for a company, used by no more than a few hundred users; or an extremely simple piece of software), you can even get by just using your logging library properly. Remember, simplicity will always beat complexity, so don't optimize prematurely by setting up a complex logging solution when you have 10 users in your app 🥸.
- Monitoring and instrumentation: Knowing what's happening in your application in real time is crucial. You need to know if your server is about to crash, if your database is slow, if your API response times are degrading, or if users are experiencing errors. Without proper monitoring, you're flying blind - you'll only find out about problems when users complain, and by then it's often too late. Monitoring is about collecting metrics (CPU, memory, request rates, error rates, latencies) and instrumentation is about adding code to your application to expose these metrics. Here are the main tools you should know about:
- Prometheus: The de facto standard for metrics collection and storage in the cloud-native world. It's a time-series database designed specifically for monitoring, pulling metrics from your applications and services at regular intervals. Prometheus uses a pull model (it scrapes metrics from your services) rather than a push model, which works great for dynamic cloud environments. It stores metrics as time-series data, making it perfect for tracking how things change over time. Prometheus excels at collecting metrics from your applications, databases, and infrastructure; storing time-series data efficiently; querying metrics using PromQL (Prometheus Query Language); and alerting when metrics cross thresholds (via Alertmanager).
It's open-source, battle-tested, and integrates with almost everything in the cloud-native ecosystem. The main downside is that it's not great for long-term storage (it's designed for short-term retention), but you can integrate it with other systems for long-term storage. Prometheus is the go-to choice for most modern applications, especially if you're running containers or Kubernetes.
- Grafana: The visualization layer that makes your metrics actually useful. Prometheus collects and stores metrics, but Grafana is what turns those metrics into beautiful, actionable dashboards. It's not just for Prometheus - Grafana can connect to dozens of data sources (Elasticsearch, Loki, InfluxDB, CloudWatch, etc.), making it a unified platform for visualizing all your observability data. Grafana excels at creating dashboards with graphs, charts, and tables; setting up alerts based on metrics; visualizing logs (when connected to Loki or Elasticsearch); and creating shareable dashboards for your team.
It's open-source, incredibly flexible, and has a huge community creating pre-built dashboards for common tools (databases, web servers, message queues, etc.). The combination of Prometheus + Grafana is the industry standard for monitoring modern applications. Grafana Cloud exists if you want a managed solution, but most teams run Grafana themselves.
Other tools worth mentioning:
- Datadog/New Relic/Splunk: Commercial, fully-managed monitoring solutions. They're easier to set up (just install an agent), have excellent UIs, and handle everything for you. But they're expensive, especially at scale, and you're locked into their ecosystem. Great for teams that want monitoring without the operational overhead, but prepare to pay for it.
- Cloud-native monitoring: AWS CloudWatch, Google Cloud Monitoring, Azure Monitor. If you're already in a cloud ecosystem, these are the obvious choices. They integrate seamlessly with other cloud services, are fully managed, and work well for cloud-native applications. The main downside is vendor lock-in and cost at scale.
The Prometheus + Grafana combination strikes the right balance between power, flexibility, and cost. It requires some setup and maintenance, but it's free, open-source, and gives you full control. For most applications, this is the way to go. Commercial solutions are worth considering if you don't want to manage infrastructure, but they come with significant costs.
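To tie the Prometheus pull model back to code, here's an instrumentation sketch assuming the prom-client npm package: the app exposes a /metrics endpoint that a Prometheus server would scrape on an interval (route names and port are arbitrary):

```javascript
const http = require("http");
const client = require("prom-client"); // assumes `npm install prom-client`

// Default process metrics (CPU, memory, event loop lag) plus one custom counter.
client.collectDefaultMetrics();
const httpRequests = new client.Counter({
  name: "http_requests_total",
  help: "Total number of HTTP requests received",
  labelNames: ["route"],
});

const server = http.createServer(async (req, res) => {
  if (req.url === "/metrics") {
    // Prometheus scrapes this endpoint on an interval (pull model).
    res.writeHead(200, { "Content-Type": client.register.contentType });
    return res.end(await client.register.metrics());
  }
  httpRequests.inc({ route: req.url }); // instrument every non-metrics request
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ ok: true }));
});

server.listen(3000, () => console.log("app + /metrics on http://localhost:3000"));
```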
- Analytics: Understanding how users interact with your application is crucial for making data-driven decisions. You need to know which features are being used, where users drop off, what pages they visit, how long they stay, and what actions they take. Without analytics, you're building in the dark - you have no idea if your product is actually solving user problems or if users are struggling with your interface. Analytics helps you understand user behavior, optimize conversion funnels, and make informed decisions about product features. Here are the main tools you should know about:
- Google Analytics 4 (GA4): The free, industry-standard web analytics platform from Google. It's the successor to Universal Analytics (which Google sunset in 2023) and is now the go-to choice for most websites and web applications. GA4 tracks user interactions, page views, events, conversions, and provides detailed insights about your audience, acquisition channels, and user behavior. It's free (up to generous limits), integrates easily with most websites, and provides powerful segmentation and reporting capabilities. GA4 excels at tracking website and app usage across platforms; understanding user journeys and conversion funnels; analyzing traffic sources and user acquisition; creating custom events and conversions; and integrating with other Google products (Google Ads, Search Console, etc.).
The main downside is privacy concerns - it tracks users across the web, which has led to GDPR compliance issues in Europe. Also, it can be overwhelming with all its features, and the learning curve is steep. But for most applications, especially if you're just starting out, GA4 is the obvious choice because it's free and covers 90% of what you need.
- Mixpanel: The product analytics platform focused on user behavior and event tracking. While GA4 is great for general web analytics, Mixpanel is specifically designed for product teams who want to understand how users interact with specific features. It's event-based (you track specific user actions), which makes it perfect for SaaS applications where you care more about feature usage than page views. Mixpanel excels at tracking user actions and events in detail; building funnels to see where users drop off; cohort analysis to understand user retention; A/B testing and experimentation; and understanding user paths through your product.
It's more expensive than GA4 (has a free tier but pricing scales with events), but it's worth it if you're building a product where understanding user behavior is critical to your success. Mixpanel is perfect for SaaS applications, mobile apps, and any product where feature usage analytics matter more than general web traffic.
- Amplitude: Similar to Mixpanel, but with a stronger focus on behavioral analytics and product intelligence. It's designed for product teams who want to go deeper into user behavior analysis. Amplitude provides powerful features like behavioral cohorts and user segmentation; path analysis to understand user journeys; retention analysis and predictive analytics; product experimentation and A/B testing; and integration with data warehouses for deeper analysis.
It's more expensive than Mixpanel and is probably overkill unless you're a product-focused company with dedicated analytics resources. But if you're serious about understanding user behavior and making data-driven product decisions, Amplitude is one of the best tools out there.
- Adobe Analytics: The enterprise-grade analytics solution. It's powerful, feature-rich, and handles massive scale. But it's expensive, complex, and probably overkill unless you're working at a large enterprise with complex analytics needs. Most startups and small companies will never need Adobe Analytics.
- Self-hosted / Open-source: Plausible, Matomo (formerly Piwik), or PostHog. These are privacy-focused alternatives that don't track users across the web and can be self-hosted. They're great if privacy is a concern or if you want full control over your analytics data. The trade-off is that they're less feature-rich than commercial solutions and require you to manage the infrastructure yourself.
For most applications starting out, GA4 is the way to go - it's free, covers most use cases, and is easy to set up. If you're building a product where understanding user behavior and feature usage is critical, Mixpanel or Amplitude are worth the investment. Choose based on your needs: GA4 for general web analytics, Mixpanel/Amplitude for product analytics, and self-hosted solutions if privacy is a primary concern.
- Cloud: The cloud has become the de facto standard for modern application deployment. Gone are the days when you'd rent a physical server and manage everything yourself. Nowadays, companies are shifting to cloud platforms for good reasons: scalability (spin up resources when you need them, scale down when you don't), cost-effectiveness (pay only for what you use instead of maintaining idle infrastructure), simplicity of deployment (managed services handle the complexity for you), and reliability (cloud providers handle redundancy, backups, and disaster recovery). Whether you're building a startup or working at an enterprise, you'll almost certainly be working with cloud platforms. Here are the main ones you should know about:
- AWS (Amazon Web Services): The market leader and the most widely adopted cloud platform. AWS has the largest market share, the most services (over 200), and the biggest ecosystem. If you're working in tech, you'll almost certainly encounter AWS at some point. It's the go-to choice for most companies, especially in the US. AWS excels at scale - it powers everything from startups to Netflix, and has the most mature service offerings. The main downside is complexity - with so many services, it can be overwhelming. But the flip side is that whatever you need, AWS probably has it. If you're starting out and unsure which cloud to learn, AWS is the safest bet because it's the most common in the industry.
- Google Cloud Platform (GCP): Google's cloud offering, known for its strength in data analytics, machine learning, and Kubernetes. GCP is often preferred by companies that are already in the Google ecosystem or need advanced data processing capabilities. It's generally considered more developer-friendly with better documentation and cleaner interfaces, but it has a smaller market share than AWS. GCP is popular among startups and companies doing heavy data processing or ML workloads. The main downside is that it's less common in enterprise settings compared to AWS, so there are fewer job opportunities. However, if you're interested in ML/AI or data engineering, GCP is worth learning.
- Microsoft Azure: The enterprise-focused cloud platform, especially strong if you're working in Microsoft-centric environments. Azure is the go-to choice for companies that already use Microsoft products (Office 365, Active Directory, etc.) because it integrates seamlessly. It's also popular in enterprise settings and government contracts. Azure has been growing rapidly and is catching up to AWS in terms of market share. The main downside is that it's less common in startup environments and the developer experience isn't as polished as AWS or GCP. But if you're targeting enterprise roles or already work with Microsoft technologies, Azure is valuable to know.
My recommendation: Get certified in at least one cloud platform. It's not just about the certificate itself - the process of studying for it will force you to actually learn the platform, understand its services, and get hands-on experience. I suggest aiming for associate-level certifications (not entry-level, but not professional-level either):
- AWS: Either the Solutions Architect Associate (SAA) or the Developer Associate (DVA) certifications. The Solutions Architect focuses on designing systems on AWS, while the Developer Associate focuses on building applications. Both are valuable: SAA is more commonly recognized and covers broader topics, whereas DVA is more focused on doing the stuff you would probably be doing as a developer with AWS.
- Google Cloud: The Associate Cloud Engineer certification, which is roughly equivalent to AWS associate-level certifications.
With enough practice and focused study, you can get any of these within 3 months, or even less if you're dedicated. The key is hands-on practice - don't just read, actually build things on the platform. Use the free tiers to experiment, deploy actual applications, and break things to understand how they work. The certification will validate your knowledge and can help you stand out, but more importantly, it forces you to actually learn the platform properly.
Frontend
- JS, HTML, CSS
- Tailwind, React
- Responsive Design
- Typography, Spacing, Color theory
- Animations, transitions
- Some other stuff I intend to add shortly ✌️
Resources I have found particularly useful:
- roadmap.sh. This was the main roadmap I followed when learning on my own. Though now you have this guide as your roadmap 😎 (but I encourage you to still check out roadmap.sh)
- OSSU Computer Science. Here I learned most of the CS curriculum on my own, much better and more effectively than what I was taught at uni.
- Frontend Masters. They have a lot of extremely good courses and content on tech that I have personally found to be top-notch.
- YouTube. Yes, there is a lot of high-quality content for free on YouTube on any topic you want.