docs: fix typos and improve grammar in documentation (#7747)

Corrects typos and grammatical errors in various markdown files to enhance clarity and readability.
pull/7791/head
garyellow 2 weeks ago committed by Kamran Ahmed
parent 6fb4218205
commit 6446603cba
12 changed files:

1. src/data/question-groups/devops/content/blue-green-deployment.md (2 changes)
2. src/data/question-groups/devops/content/cicd-setup.md (4 changes)
3. src/data/question-groups/devops/content/container-vs-vm.md (2 changes)
4. src/data/question-groups/devops/content/devsecops.md (4 changes)
5. src/data/question-groups/devops/content/load-balancer.md (2 changes)
6. src/data/question-groups/devops/content/migrate-environment.md (2 changes)
7. src/data/question-groups/devops/content/optimize-cicd.md (2 changes)
8. src/data/question-groups/devops/content/reverse-proxy.md (2 changes)
9. src/data/question-groups/devops/content/role-of-devops.md (2 changes)
10. src/data/question-groups/devops/content/what-is-docker.md (2 changes)
11. src/data/question-groups/devops/content/what-is-version-control.md (2 changes)
12. src/data/question-groups/devops/devops.md (4 changes)

src/data/question-groups/devops/content/blue-green-deployment.md
@@ -6,7 +6,7 @@ At a high level, the way this process works is as follows:
 - **Setup Two Environments**: Prepare two identical environments: blue (current live environment) and green (new version environment).
 - **Deploy to Green**: Deploy the new version of the application to the green environment through your normal CI/CD pipelines.
-- **Testing green**: Perform testing and validation in the green environment to ensure the new version works as expected.
+- **Test green**: Perform testing and validation in the green environment to ensure the new version works as expected.
 - **Switch Traffic**: Once the green environment is verified, switch the production traffic from blue to green. Optionally, the traffic switch can be done gradually to avoid potential problems from affecting all users immediately.
 - **Monitor**: Monitor the green environment to ensure it operates correctly with live traffic. Take your time, and make sure you’ve monitored every single major event before issuing the “green light”.
 - **Fallback Plan**: Keep the blue environment intact as a fallback. If any issues arise in the green environment, you can quickly switch traffic back to the blue environment. This is one of the fastest rollbacks you’ll experience in deployment and release management.
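The switch-and-fallback flow in this file can be sketched as a toy router. This is a minimal illustration, not a real load balancer: `Router`, the environment names, and the request strings are all made up for the example.

```python
# Hypothetical sketch of a blue-green traffic switch with instant rollback.
# "Router" stands in for whatever actually flips traffic (load balancer, DNS).

class Router:
    def __init__(self, blue, green):
        self.environments = {"blue": blue, "green": green}
        self.live = "blue"  # blue starts as the current live environment

    def switch_to(self, color):
        # Cut production traffic over to the given environment.
        self.live = color

    def rollback(self):
        # Fallback plan: flip traffic back to the other environment.
        self.live = "green" if self.live == "blue" else "blue"

    def handle(self, request):
        # All live traffic goes to whichever environment is active.
        return f"{self.environments[self.live]} served {request}"

router = Router(blue="app-v1", green="app-v2")
router.switch_to("green")          # green verified, cut traffic over
print(router.handle("/checkout"))  # app-v2 served /checkout
router.rollback()                  # issue found: instant rollback to blue
print(router.handle("/checkout"))  # app-v1 served /checkout
```

Because the blue environment is kept intact, the rollback is a single state flip rather than a redeploy, which is what makes this strategy's rollback so fast.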

src/data/question-groups/devops/content/cicd-setup.md
@@ -3,7 +3,7 @@ Setting up a CI/CD pipeline from scratch involves several steps. Assuming you’
 1. **Set up the Continuous Integration (CI)**:
 - Select a continuous integration tool (there are many, like Jenkins, GitLab CI, CircleCI, pick one).
 - Connect the CI tool to your version control system.
-- Write a build script that defines the build process, including steps like code checkout, dependencies installation, compiling the code, and running tests.
+- Write a build script that defines the build process, including steps like code checkout, dependency installation, compiling the code, and running tests.
 - Set up automated testing to run on every code commit or pull request.
 2. **Artifact Storage**:
@@ -27,6 +27,6 @@ Remember that this system should be able to pull the artifacts from the continuo
 6. **Security and Compliance**:
 - By now, it’s a good idea to think about integrating security scanning tools into your pipeline (e.g., Snyk, OWASP Dependency-Check).
-- nsure compliance with relevant standards and practices depending on your specific project’s needs.
+- Ensure compliance with relevant standards and practices depending on your specific project’s needs.
 Additionally, as a good practice, you might also want to document the CI/CD process, pipeline configuration, and deployment steps. This is to train new team members on using and maintaining the pipelines you just created.
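The build-script idea from the CI steps above can be sketched as a fail-fast stage runner. This is a toy sketch under assumed names (the stage names and the `run_pipeline` helper are illustrative, not any CI tool's API):

```python
# Hypothetical fail-fast CI pipeline runner: run stages in order,
# stop at the first failure so later stages don't waste resources.

def run_pipeline(stages):
    """Run (name, step) pairs in order; stop at the first failure."""
    results = []
    for name, step in stages:
        ok = step()
        results.append((name, ok))
        if not ok:
            break  # fail fast: skip the remaining stages
    return results

stages = [
    ("checkout", lambda: True),      # pull the code from version control
    ("install-deps", lambda: True),  # dependency installation
    ("compile", lambda: True),
    ("unit-tests", lambda: False),   # a failing test stops the pipeline here
    ("security-scan", lambda: True), # never reached on this run
]

print(run_pipeline(stages))
```

Real CI tools express the same idea declaratively (e.g. stages in a YAML file), but the ordering and fail-fast semantics are the same.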

src/data/question-groups/devops/content/container-vs-vm.md
@@ -1,6 +1,6 @@
 A container is a runtime instance of a container image (which is a lightweight, executable package that includes everything needed to run your code). It is the execution environment that runs the application or service defined by the container image.
 When a container is started, it becomes an isolated process on the host machine with its own filesystem, network interfaces, and other resources.
-Containers share the host operating system's kernel, making them more efficient and faster to start than virtual machines.
+Containers share the host operating system's kernel, making them more efficient and quicker to start than virtual machines.
 A virtual machine (VM), on the other hand, is an emulation of a physical computer. Each VM runs a full operating system and has virtualized hardware, which makes them more resource-intensive and slower to start compared to containers.

src/data/question-groups/devops/content/devsecops.md
@@ -1,4 +1,4 @@
-To implement security in a DevOps pipeline (DevSecOps), you should integrate security practices throughout the development and deployment process. This is not just about securing the app once it’s in production, this is about securing the entire app-creation process.
+To implement security in a DevOps pipeline (DevSecOps), you should integrate security practices throughout the development and deployment process. This is not just about securing the app once it’s in production, this is about securing the entire application-creation process.
 That includes:
@@ -7,5 +7,5 @@ That includes:
 3. **Continuous Monitoring**: Monitor the pipeline and the deployed applications for security incidents using tools like Prometheus, Grafana, and specialized security monitoring tools.
 4. **Infrastructure as Code - Security**: Ensure that infrastructure configurations defined in code are secure by scanning IaC templates (like Terraform) for misconfigurations and vulnerabilities (like hardcoded passwords).
 5. **Access Control**: Implement strict access controls, using something like role-based access control (RBAC) or ABAC (attribute-based access control) and enforcing the principle of least privilege across the pipeline.
-6. **Compliance Checks**: Figure out the compliance and regulations of your industry and integrate those checks to ensure the pipeline adheres to industry standards and regulatory requirements.
+6. **Compliance Checks**: Figure out the compliance requirements and regulations of your industry and integrate those checks to ensure the pipeline adheres to industry standards and regulatory requirements.
 7. **Incident Response**: Figure out a clear incident response plan and integrate security alerts into the pipeline to quickly address potential security breaches.
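The RBAC and least-privilege idea from point 5 fits in a few lines. This is a minimal sketch with made-up roles and permission names, not a real access-control system:

```python
# Minimal RBAC sketch: each role maps to an explicit permission set.
# Roles and permission strings are illustrative.

ROLE_PERMISSIONS = {
    "developer": {"pipeline:read", "pipeline:trigger"},
    "release-manager": {"pipeline:read", "pipeline:trigger", "deploy:prod"},
    "auditor": {"pipeline:read"},
}

def is_allowed(role, permission):
    # Least privilege: anything not explicitly granted is denied,
    # including every permission for an unknown role.
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("release-manager", "deploy:prod"))  # True
print(is_allowed("developer", "deploy:prod"))        # False
print(is_allowed("intern", "pipeline:read"))         # False: unknown role
```

The key design choice is default-deny: permissions must be granted explicitly, which is exactly what "principle of least privilege" asks for.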

src/data/question-groups/devops/content/load-balancer.md
@@ -4,4 +4,4 @@ A load balancer is a device or software that distributes incoming network traffi
 It is important because it improves the availability, reliability, and performance of applications by evenly distributing the load, preventing server overload, and providing failover capabilities in case of server failures.
-Load balancers are usually used when scaling up RESTful microservices, because given their stateless nature, you can set up multiple copies of the same one behind a load balancer and let it distribute the load amongst all copies evenly.
+Load balancers are usually used when scaling up RESTful microservices, as, given their stateless nature, you can set up multiple copies of the same one behind a load balancer and let it distribute the load amongst all copies evenly.

src/data/question-groups/devops/content/migrate-environment.md
@@ -3,6 +3,6 @@ To migrate an existing application into a containerized environment, you’ll ne
 1. Figure out what parts of the application need to be containerized together.
 2. Create your Dockerfiles and define the entire architecture in that configuration, including the interservice dependencies that there might be.
 3. Figure out if you also need to containerize any external dependency, such as a database. If you do, add that to the Dockerfile.
-4. Build the actual docker image.
+4. Build the actual Docker image.
 5. Once you make sure it runs locally, configure the orchestration tool you use to manage the containers.
 6. You’re now ready to deploy to production, however, make sure you keep monitoring and alerting on any problem shortly after the deployment in case you need to roll back.
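The interservice dependencies from step 2 imply a start order: a container's dependencies must be up before it starts. That ordering is a topological sort. A sketch with made-up service names (real orchestrators express this with constructs like Compose's `depends_on`):

```python
# Derive a container start order from interservice dependencies.
# Service names and edges are illustrative.
from graphlib import TopologicalSorter

dependencies = {
    "api": {"db", "cache"},  # api needs the database and cache running first
    "worker": {"db"},
    "db": set(),
    "cache": set(),
}

# static_order() yields every service after all of its dependencies.
start_order = list(TopologicalSorter(dependencies).static_order())
print(start_order)
```

`graphlib` would also raise a `CycleError` here if two services depended on each other, which is a useful sanity check on the architecture before writing any Dockerfiles.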

src/data/question-groups/devops/content/optimize-cicd.md
@@ -5,5 +5,5 @@ There are many ways in which you can optimize a CI/CD pipeline for performance a
 3. **Incremental Builds**: Implement incremental builds that only rebuild parts of the codebase that have changed, rather than the entire project. This is especially useful for large projects with big codebases.
 4. **Efficient Testing**: Prioritize and parallelize tests, running faster unit tests early and reserving more intensive integration or end-to-end tests for later stages. Be smart about it and use test impact analysis to only run tests affected by recent code changes.
 5. **Monitor Pipeline Health**: Continuously monitor the pipeline for bottlenecks, failures, and performance issues. Use metrics and logs to identify and address inefficiencies.
-6. **Environment Consistency**: Ensure that build, test, and production environments are consistent to avoid "it works on my machine" issues. Use containerization or Infrastructure as Code (IaC) to maintain environment parity. Your code should work in all environments, and if it doesn’t, it should not be the fault of the environment.
+6. **Environment Consistency**: Ensure that build, test, and production environments are consistent to avoid "It works on my machine" issues. Use containerization or Infrastructure as Code (IaC) to maintain environment parity. Your code should work in all environments, and if it doesn’t, it should not be the fault of the environment.
 7. **Pipeline Stages**: Use pipeline stages wisely to catch issues early. For example, fail fast on linting or static code analysis before moving on to more resource-intensive stages.
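The test impact analysis mentioned in point 4 boils down to intersecting changed files with each test's covered files. A toy sketch; the coverage map is hardcoded for illustration, whereas real tools derive it from coverage data:

```python
# Hypothetical test impact analysis: run only the tests whose covered
# files overlap the set of changed files. All names are illustrative.

TEST_COVERAGE = {
    "test_auth.py": {"auth.py", "session.py"},
    "test_billing.py": {"billing.py"},
    "test_api.py": {"api.py", "auth.py"},
}

def impacted_tests(changed_files):
    changed = set(changed_files)
    return sorted(
        test for test, covered in TEST_COVERAGE.items()
        if covered & changed  # non-empty intersection: this test is affected
    )

print(impacted_tests(["auth.py"]))  # ['test_api.py', 'test_auth.py']
```

A commit touching only `billing.py` would trigger just `test_billing.py`, skipping the rest of the suite, which is where the pipeline-time savings come from.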

src/data/question-groups/devops/content/reverse-proxy.md
@@ -1,4 +1,4 @@
-![Reverse Procy Explained](https://assets.roadmap.sh/guest/reverse-proxy-explained-t12iw.png)
+![Reverse Proxy Explained](https://assets.roadmap.sh/guest/reverse-proxy-explained-t12iw.png)
 A reverse proxy is a piece of software that sits between clients and backend servers, forwarding client requests to the appropriate server and returning the server's response to the client. It helps with load balancing, security, caching, and handling SSL termination.
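The "forwarding client requests to the appropriate server" part is a routing decision. A minimal sketch of that decision in isolation, with made-up path prefixes and backend addresses (a real reverse proxy like nginx does this, plus the actual forwarding, via its configuration):

```python
# Sketch of a reverse proxy's routing step: map a request path to a
# backend server by longest prefix match. All values are illustrative.

BACKENDS = {
    "/api/": "http://app-server:8080",
    "/static/": "http://static-server:8081",
}
DEFAULT_BACKEND = "http://app-server:8080"

def pick_backend(path):
    matches = [prefix for prefix in BACKENDS if path.startswith(prefix)]
    if not matches:
        return DEFAULT_BACKEND
    # Longest prefix wins, so more specific routes take precedence.
    return BACKENDS[max(matches, key=len)]

print(pick_backend("/static/logo.png"))  # http://static-server:8081
print(pick_backend("/api/v1/users"))     # http://app-server:8080
```

From the client's point of view there is only one server; the proxy hides which backend actually handled the request, which is also what enables the caching and SSL-termination roles mentioned above.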

src/data/question-groups/devops/content/role-of-devops.md
@@ -9,7 +9,7 @@ With that said other key responsibilities may include:
 - Implementing and managing CI/CD pipelines.
 - Automating infrastructure provisioning and configuration using IaC tools.
 - Monitoring and maintaining system performance, security, and availability.
-- Collaborating with developers to streamline code deployments and ensures smooth operations.
+- Collaborating with developers to streamline code deployments and ensure smooth operations.
 - Managing and optimizing cloud infrastructure.
 - Ensuring system scalability and reliability.
 - Troubleshooting and resolving issues across the development and production environments.

src/data/question-groups/devops/content/what-is-docker.md
@@ -2,7 +2,7 @@ Docker is an open-source platform that enables developers to create, deploy, and
 That, in turn, ensures that the application can run consistently across various computing environments.
-Docker has become one of the most popular DevOps tools because it provides a consistent and isolated environment for development, continuous testing, and deployment. This consistency helps to eliminate the common "it works on my machine" problem by ensuring that the application behaves the same way, regardless of where it is run—whether on a developer's local machine, a testing server, or in production.
+Docker has become one of the most popular DevOps tools because it provides a consistent and isolated environment for development, continuous testing, and deployment. This consistency helps to eliminate the common "It works on my machine" problem by ensuring that the application behaves the same way, regardless of where it is run—whether on a developer's local machine, a testing server, or in production.
 Additionally, Docker simplifies the management of complex applications by allowing developers to break them down into smaller, manageable microservices, each running in its own container.

src/data/question-groups/devops/content/what-is-version-control.md
@@ -4,4 +4,4 @@ It is important in DevOps because it allows multiple team members to collaborate
 In terms of tooling, one of the best and most popular version control systems is Git. It provides what is known as a distributed version control system, giving every team member a piece of the code so they can branch it, work on it however they feel like it, and push it back to the rest of the team once they’re done.
-That said, there are other legacy teams using alternatives like CSV or SVN.
+That said, there are other legacy teams using alternatives like CVS or SVN.

src/data/question-groups/devops/devops.md
@@ -226,9 +226,9 @@ questions:
 The evolution of technology and practices, coupled with the increase in complexity of the systems we develop, make the role of DevOps more relevant by the day.
-But becoming a successful DevOps is not a trivial task, especially because this role is usually the evolution of a developer looking to get more involved in other related ops areas or someone from ops who’s starting to get more directly involved in the development space.
+But becoming a successful DevOps engineer is not a trivial task, especially because this role is usually the evolution of a developer looking to get more involved in other related ops areas or someone from ops who’s starting to get more directly involved in the development space.
-Either way, DevOps engineers live between the development and operations teams, understanding enough about each area to be able to work towards improving their interactions.
+Either way, DevOps engineers work between the development and operations teams, understanding enough about each area to be able to work towards improving their interactions.
 Because of this strange situation, while detailed roadmaps (be sure to check out our DevOps roadmap!) help a lot, getting ready for a DevOps interview requires a lot of work.
