@@ -0,0 +1,45 @@

# check_pgactivity

## Check_pgactivity

check_pgactivity is a popular monitoring tool designed specifically for PostgreSQL. It is an efficient and flexible way to monitor many aspects of a PostgreSQL instance, such as connectivity, query activity, locks, and other key performance indicators. By collecting and reporting PostgreSQL performance data in a consistent format, it helps database administrators and developers keep their databases running efficiently.

### Features

- **Wide range of checks:** check_pgactivity offers numerous service checks, including database connections, query durations, transactions, WAL files, bloat, and much more. This gives users insight into virtually every important aspect of their PostgreSQL environment.

- **Nagios integration:** The tool integrates seamlessly with Nagios, a widely used open-source monitoring solution, so administrators can add PostgreSQL monitoring to an existing monitoring setup with ease. A sample command definition is sketched below.

- **Flexible output:** check_pgactivity generates output compatible with various monitoring solutions, making it adaptable to different systems' requirements.

- **Custom thresholds and alerts:** Users can set warning and critical thresholds for specific metrics, allowing them to detect potential issues early and take appropriate action.

- **Perl-based:** As a Perl script, check_pgactivity is lightweight and easy to integrate into existing tools and workflows.
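
As a sketch of the Nagios integration, a command definition might look like the following (the plugin path, monitoring role, and threshold values are assumptions, not prescribed settings):

```
define command {
    command_name    check_pgactivity_backends
    command_line    /usr/lib/nagios/plugins/check_pgactivity -s backends -h $HOSTADDRESS$ -U monitoring -w 80% -c 90%
}
```

Here `-s backends` checks the number of client connections against the warning (`-w`) and critical (`-c`) thresholds, expressed as a percentage of `max_connections`.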

### Usage

To use check_pgactivity, you first need to install it on your system. You can download the latest version from the [official repository](https://github.com/OPMDG/check_pgactivity/releases). Ensure that the required Perl modules (DBD::Pg and DBI) are installed.

Once installed, you can execute the script to perform different monitoring tasks:

```
check_pgactivity -s <SERVICE_NAME> -h <HOSTNAME> -U <USERNAME> -p <PORT> -d <DB_NAME>
```

Replace the placeholders with the appropriate connection details and choose the desired service check according to your monitoring requirements. For a full list of supported services, refer to the [official documentation](https://github.com/OPMDG/check_pgactivity/blob/master/doc/check_pgactivity.pod).

### Examples

To monitor the number of client connections in a PostgreSQL instance (the `backends` service):

```
check_pgactivity -s backends -h localhost -U postgres -p 5432 -d my_database
```

To check the age of the oldest two-phase-commit (prepared) transaction:

```
check_pgactivity -s oldest_2pc -h localhost -U postgres -p 5432 -d my_database
```

In conclusion, check_pgactivity is a powerful and versatile tool for monitoring PostgreSQL databases. By tracking key performance metrics and integrating with monitoring solutions like Nagios, it provides comprehensive insight into your PostgreSQL environment and helps you fine-tune and optimize its performance.

@@ -0,0 +1,75 @@

# temBoard

## Monitoring with temBoard

In this section, we'll explore a powerful management and monitoring tool for PostgreSQL: `temBoard`. It's a user-friendly, highly adaptable, open-source web application designed to monitor and manage your database instances efficiently.

### What is temBoard?

`temBoard` is a comprehensive management and monitoring solution for PostgreSQL instances. It provides a real-time, detailed view of databases and their current status, allowing administrators to oversee their systems efficiently. Key features of temBoard include:

- Real-time monitoring of key performance indicators (KPIs).
- Historical data analysis with a built-in data retention mechanism.
- An intuitive and customizable web interface.
- Security features such as role-based access control and SSL/TLS support.
- Management of multiple PostgreSQL clusters from one central location.
- Extensibility through plugins for specific tasks.

### Installing temBoard

You can install temBoard using `pip`, Python's package installer. Before installation, make sure the following prerequisites are in place:

1. Python 3.6 or higher: You can install Python from the official website or through your package manager.
2. PostgreSQL server 9.4 or higher: Your PostgreSQL instances should meet this version requirement for full feature support.

Use the following command to install temBoard with `pip`:

```
pip install temboard
```

### Configuring and Running temBoard

After installation, temBoard needs to be configured before it can start monitoring PostgreSQL. Follow these steps:

1. Create the temBoard configuration file: The default location is `/etc/temboard/temboard.conf`. You can use the following commands to create and edit the file:

```
sudo mkdir /etc/temboard
sudo touch /etc/temboard/temboard.conf
sudo nano /etc/temboard/temboard.conf
```

2. Add the following contents to the configuration file and modify the values as needed:

```
[temboard]
address = 0.0.0.0
port = 8888
ssl_cert_file = /etc/temboard/temboard_SERVER_NAME_chained.pem
ssl_key_file = /etc/temboard/temboard_SERVER_NAME.key

[repository]
host = localhost
port = 5432
user = temboard
password = temboard_password
dbname = temboard

[logging]
method = stderr
level = INFO
format = %(asctime)s [%(levelname)s] %(message)s
```
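
The repository role and database referenced in the `[repository]` section must exist before initialization. A minimal sketch using the standard PostgreSQL client tools (the role and database names simply mirror the configuration above):

```
sudo -u postgres createuser --pwprompt temboard
sudo -u postgres createdb --owner temboard temboard
```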

3. Initialize the temBoard repository: Use the following command to initialize the database schema used by temBoard:

```
temboard-admin -c /etc/temboard/temboard.conf initialize
```

4. Start temBoard: You can launch it with the following command:

```
temboard -c /etc/temboard/temboard.conf
```

Once temBoard is running, access the web interface with your browser at `https://<your_server_name>:8888/`. You can now monitor and manage your PostgreSQL instances from the temBoard web interface.

@@ -0,0 +1,35 @@

# check_pgbackrest

## Check pgBackRest

In this section, we'll discuss the importance of monitoring your PostgreSQL backup and recovery solution, focusing on checking `pgBackRest`. `pgBackRest` is a widely used backup tool for PostgreSQL databases, providing features like full, differential, and incremental backups, WAL archiving, support for multiple repositories, and parallel backup/restore processes.

### Why should you monitor pgBackRest?

Monitoring `pgBackRest` helps ensure that your PostgreSQL backups are consistent, up to date, and free from potential issues. By regularly checking your backups, you can maintain a reliable and efficient backup and restore process for your PostgreSQL database.
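
One simple way to do this regularly is to schedule the built-in `check` command (described next) from cron. A sketch, assuming a stanza named `main` (the stanza name and schedule are placeholders):

```sh
# crontab entry: run the check hourly and send failures to syslog
0 * * * * pgbackrest --stanza=main check || logger -t pgbackrest "check failed"
```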

### How to check pgBackRest?

`pgBackRest` provides a built-in command called `check` which performs various checks to validate your repository and configuration settings. The command is executed as follows:

```sh
pgbackrest --stanza=<stanza_name> check
```

`<stanza_name>` should be replaced with the name of the stanza for which you want to verify the repository and configuration settings.

### What does the check command do?

When you run `pgbackrest check`, it performs the following tasks:

1. **Configuration validation**: It verifies that the configuration file (`pgbackrest.conf`) contains valid settings and that the runtime parameters are properly set.

2. **Backup consistency**: It checks the consistency of backup files within the stanza, ensuring that there are no missing or incomplete backups.

3. **Archive validation**: It examines the state of WAL archive files, ensuring that they are present and retrievable according to the settings specified in the configuration.

4. **Remote connectivity**: If any remote repositories are configured, it checks connectivity to the remote hosts and verifies that the repository paths are accessible.

### Conclusion

Regularly monitoring and checking `pgBackRest` is essential for maintaining a reliable backup and recovery solution for your PostgreSQL database. Using the built-in `check` command, you can confirm that your repository and configuration settings are valid, backups are consistent, and archives are available, giving you peace of mind and making it easier to recover your database in case of disaster.

@@ -0,0 +1,49 @@

# PostgreSQL Anonymizer

PostgreSQL Anonymizer is an extension that helps you protect sensitive data by anonymizing and obfuscating it. It is useful for meeting privacy regulations and ensuring the security of personal information contained in your databases.

## Key Features

- **Dynamic Masking**: With dynamic masking, designated roles see anonymized data while the real data stays in the underlying tables, so only masked values are revealed to certain users or applications. A minimal sketch using the extension's `SECURITY LABEL` syntax (the `analyst` role and `clients.email` column are illustrative):

```sql
-- declare a masking rule on a column, mark a role as masked, then enable dynamic masking
SECURITY LABEL FOR anon ON COLUMN clients.email
  IS 'MASKED WITH FUNCTION anon.fake_email()';
SECURITY LABEL FOR anon ON ROLE analyst IS 'MASKED';
SELECT anon.start_dynamic_masking();
```

- **In-Place Anonymization**: You can also anonymize data in place, making the change permanent. This method is useful when you need to share databases between environments, such as testing and development, but want to ensure privacy. Assuming masking rules have been declared as above:

```sql
SELECT anon.anonymize_table('clients');
```

- **Extensible and Customizable Functions**: You can define your own masking functions, providing great flexibility in how you anonymize data. A custom function can then be attached to a column through a masking rule. A sketch (the SSN layout and the schema-trust requirements of your anonymizer version may differ):

```sql
-- keep the first two and last four characters of the value, pad the middle
CREATE FUNCTION public.mask_ssn(ssn text) RETURNS text AS $$
  SELECT anon.partial(ssn, 2, 'X-XX-', 4);
$$ LANGUAGE sql;

SECURITY LABEL FOR anon ON COLUMN clients.ssn
  IS 'MASKED WITH FUNCTION public.mask_ssn(ssn)';
```

## Getting Started

1. Install the PostgreSQL Anonymizer extension and initialize it:
```sql
CREATE EXTENSION IF NOT EXISTS anon CASCADE;
SELECT anon.init();
```
2. Declare an anonymization rule for each sensitive column in your tables. You can use the built-in masking functions or your own:
```sql
SECURITY LABEL FOR anon ON COLUMN clients.email
  IS 'MASKED WITH FUNCTION anon.fake_email()';
```
3. Apply anonymization using either dynamic masking or the in-place method, depending on your requirements (see the sketch below).
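
For a permanent, database-wide pass over every declared rule, a one-shot call is available (shown here as a sketch; run it only on a copy you intend to anonymize):

```sql
SELECT anon.anonymize_database();
```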

## Additional Resources

For further details on PostgreSQL Anonymizer, consult the following resources:

- [Official PostgreSQL Anonymizer Documentation](https://postgresql-anonymizer.readthedocs.io/)
- [GitLab Repository](https://gitlab.com/dalibo/postgresql_anonymizer)

@@ -0,0 +1,29 @@

# Anonymization

Anonymization is the process of protecting sensitive and personally identifiable information (PII) from exposure by replacing or altering the data so that it becomes impossible, or extremely difficult, to trace back to its original source. In the context of PostgreSQL, anonymization techniques are used to preserve the confidentiality and privacy of the data while still making it available for analysis or testing.

### Why is anonymization important?

Anonymization has become a critical aspect of database management due to the growing need for data protection and compliance with privacy regulations like GDPR, HIPAA, and CCPA. Non-compliance can result in fines, damage to brand reputation, and potential legal battles.

### Techniques for anonymizing data in PostgreSQL

1. **Data Masking**: Replacing sensitive information with random characters or numbers to make it unrecognizable, such as replacing a person's name with random letters (see the sketch after this list).

2. **Generalization**: Aggregating data to a higher level of abstraction, such as converting exact ages to age groups or locations to regions. This allows you to analyze the data at a higher level without compromising individual privacy.

3. **Pseudonymization**: Replacing sensitive information with synthetic substitutes while maintaining a mapping of the original data to the pseudonyms. The data remains useful for analysis, but identifiable information is protected.

4. **Data Swapping**: Interchanging some sensitive data between records to create ambiguity about the true data combinations, such as swapping salaries among some employees within a company.

5. **Random Noise Addition**: Adding random noise to the data elements in a dataset, making it more difficult to identify individual records.
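
A minimal plain-SQL sketch of the first two techniques, assuming a hypothetical `customers` table with `name`, `birth_date`, and `postal_code` columns:

```sql
SELECT
  md5(name)                                            AS name_masked,  -- masking: irreversible digest
  (extract(year FROM age(birth_date))::int / 10) * 10  AS age_group,    -- generalization: 10-year buckets
  left(postal_code, 2) || 'XXX'                        AS region_code   -- generalization: coarse location
FROM customers;
```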

### Tools for anonymizing data in PostgreSQL

1. **pg_anonymize**: A PostgreSQL extension for masking and anonymizing data. It can generate fake data, mask existing data, or shuffle data between rows.

2. **anon** (PostgreSQL Anonymizer): A PostgreSQL extension that offers built-in anonymization functions for data masking, randomization, and anonymization with k-anonymity.

3. **Data Masker**: A commercial solution offering tools to mask and pseudonymize sensitive data according to your specific requirements.

In conclusion, anonymization is an essential practice in any PostgreSQL infrastructure that handles sensitive and personally identifiable information. Implementing anonymization techniques enables your organization to comply with data protection regulations and maintain the privacy of individuals, while still allowing you to analyze the patterns and trends in your data.

@@ -0,0 +1,39 @@

# Terraform

Terraform is an Infrastructure as Code (IaC) tool developed by HashiCorp that streamlines and automates the management of your infrastructure. With Terraform, you can define, provision, and manage resources like virtual machines, storage accounts, and networking components using a declarative language called HashiCorp Configuration Language (HCL). JSON can be used as an alternative to HCL, but HCL is better suited to human-readable configuration.

### Advantages of Terraform

1. **Platform Agnostic**: Terraform supports a variety of cloud providers like AWS, Google Cloud, Azure, and many more, allowing you to manage multi-cloud deployments seamlessly.

2. **Version Control**: By maintaining your infrastructure as code, you can leverage version control systems like Git. This enables seamless collaboration, a clearer understanding of changes, and the ability to roll back when needed.

3. **Modularity**: Terraform promotes modular and reusable code, which simplifies the management of complex infrastructure setups.

4. **State Management**: Terraform persists the state of your infrastructure, allowing you to inspect the current configuration and track changes over time.

### Main Components of Terraform

1. **Configuration Files**: Written in HCL, these describe the infrastructure you want to create, update, or delete.

2. **Terraform CLI**: The command-line interface that manages the lifecycle of your infrastructure.

3. **State File**: This file stores the state of your infrastructure and is used by Terraform to determine the changes required during each operation.

4. **Providers**: The plugins that integrate Terraform with various cloud providers and services, such as AWS, Azure, and Google Cloud.

### Terraform Workflow

The typical workflow when working with Terraform involves four main steps:

1. **Write**: Describe your infrastructure using configuration files.

2. **Initialize**: Run `terraform init` to download the required providers and set up the backend for storing your state file.

3. **Plan**: Run `terraform plan` to preview the actions Terraform will take to reach the desired infrastructure state.

4. **Apply**: Run `terraform apply` to execute the planned actions and provision your infrastructure.
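
In keeping with this guide's PostgreSQL focus, here is a small illustrative configuration, assuming the community `cyrilgdn/postgresql` provider and a reachable local instance (all names and credentials are placeholders):

```hcl
terraform {
  required_providers {
    postgresql = {
      source = "cyrilgdn/postgresql"
    }
  }
}

variable "pg_password" { sensitive = true }

provider "postgresql" {
  host     = "localhost"
  port     = 5432
  username = "postgres"
  password = var.pg_password
}

# a role and a database it owns, managed declaratively
resource "postgresql_role" "app_user" {
  name  = "app_user"
  login = true
}

resource "postgresql_database" "app_db" {
  name  = "my_application"
  owner = postgresql_role.app_user.name
}
```

Running `terraform init`, `terraform plan`, and `terraform apply` against this file walks through the workflow described above.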

Keep in mind that Terraform is highly extensible, supporting custom providers, provisioners, and various third-party tools to make managing your infrastructure even more efficient.

In conclusion, if you're looking to automate and improve your administration of PostgreSQL or any other infrastructure, familiarity with Terraform is an invaluable asset in your toolkit.

@@ -0,0 +1,29 @@

# liquibase, sqitch, Bytebase, ora2pg etc

Migrations are crucial in the lifecycle of database applications. As an application evolves, changes to the database schema, and sometimes to the data itself, become necessary. In this section, we will explore four popular migration tools: Liquibase, Sqitch, Bytebase, and Ora2Pg, with a brief summary of each.

### Liquibase

[Liquibase](https://www.liquibase.org/) is an open-source, database-independent library for tracking, managing, and applying database schema changes. It can be integrated with various build environments, such as Maven or Gradle, and supports multiple database management systems, including PostgreSQL.

Liquibase tracks changes in XML, YAML, JSON, or SQL format and uses changesets to uniquely identify each migration. Advantages of Liquibase include robust support for many database platforms and compatibility with version control systems like Git or SVN.

### Sqitch

[Sqitch](https://sqitch.org/) is another database-agnostic schema change management tool. It does not impose a special file format for migration scripts, allowing developers to write changes in the database's native language (e.g., plain SQL or PL/pgSQL for PostgreSQL).

Sqitch stores metadata about changes in a separate schema, which makes it easy to understand the relationships between changes and their dependencies. Furthermore, it integrates well with version control systems, making it a popular choice for managing database migrations. A typical workflow is sketched below.
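
A minimal sketch of a Sqitch project targeting PostgreSQL (the project, change, and database names are placeholders):

```sh
sqitch init myapp --engine pg                      # create a new Sqitch project
sqitch add create_users_table -n 'Add users table' # generates deploy/revert/verify scripts to edit
sqitch deploy db:pg://app_user@localhost/myapp_db  # apply pending changes
sqitch status db:pg://app_user@localhost/myapp_db  # inspect deployment state
```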

### Bytebase

[Bytebase](https://bytebase.io/) is a web-based, open-source database schema change management tool that works well with PostgreSQL. It provides a user-friendly interface for managing migrations, collaborating with team members, and tracking the progress of changes across multiple environments.

Bytebase offers features such as schema versioning, pull-request-style reviews, and automated deployment. Its intuitive interface and collaborative features make it a good choice for teams that include non-technical users, or for organizations that want more control over their migration process.

### Ora2Pg

[Ora2Pg](https://ora2pg.darold.net/) is a migration tool designed specifically to facilitate the migration of Oracle database schemas and data to PostgreSQL. It supports a wide range of schema objects, including tables, indexes, sequences, views, and more.

Ora2Pg can export schema information in various formats, including SQL and PL/pgSQL, and generate migration scripts to ease the transition from Oracle to PostgreSQL. If you're planning to switch from Oracle to PostgreSQL, Ora2Pg is a valuable tool for streamlining the migration process; a short sketch follows.
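
An illustrative Ora2Pg invocation (the project name, paths, and the Oracle connection settings in `ora2pg.conf` are placeholders):

```sh
ora2pg --project_base /migration --init_project acme_mig   # scaffold a migration project
# edit /migration/acme_mig/config/ora2pg.conf with the Oracle DSN and credentials, then:
ora2pg -c /migration/acme_mig/config/ora2pg.conf -t TABLE -o tables.sql   # export table DDL
```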

In conclusion, Liquibase, Sqitch, Bytebase, and Ora2Pg are four powerful tools for managing database schema changes in a PostgreSQL environment. By understanding each tool's capabilities, you can select the right one for your needs and ensure smooth database migrations throughout your application's lifecycle.

@@ -1,38 +0,0 @@

# Liquibase, Sqitch, & Bytebase

In this section, we'll take a closer look at three popular tools for managing database migrations in PostgreSQL: Liquibase, Sqitch, and Bytebase. Each tool has its own unique features and way of handling migrations, giving you options to choose the one that best fits your project's requirements.

## Liquibase

[Liquibase](https://www.liquibase.org/) is an open-source, database-independent library for tracking, managing, and applying database schema changes. It uses a changelog file to keep track of each change applied to the database, so you always know the state of your database schema.

### Key Features:

- Supports various databases including PostgreSQL, MySQL, Oracle, and more.
- Changelog support using XML, JSON, YAML, or SQL formats.
- Automatically generates rollback statements for applied changes.
- Supports advanced features such as contexts, labels, and preconditions.

## Sqitch

[Sqitch](https://sqitch.org/) is an open-source tool designed specifically for managing database schema changes, emphasizing simplicity, ease of use, and native SQL support. Unlike Liquibase, Sqitch does not use a changelog file, focusing instead on individual migration scripts.

### Key Features:

- Native SQL support: write your migrations in pure SQL.
- No special language or DSL required.
- Supports PostgreSQL, MySQL, SQLite, Oracle, and more.
- Offers a powerful command-line interface (CLI) for managing your migrations.

## Bytebase

[Bytebase](https://bytebase.io/) is a modern, web-based database schema change management and version control tool. Bytebase allows you to manage and track schema changes across multiple environments, streamlining the deployment of database schema changes.

### Key Features:

- Web-based UI for managing and tracking schema changes.
- Supports PostgreSQL, MySQL, and SQLite.
- Schema change review and approval workflows.
- Integrates with popular version control systems like GitHub, GitLab, and Bitbucket.

In summary, Liquibase, Sqitch, and Bytebase are all great options for managing migrations in PostgreSQL. Each tool offers unique features and approaches to handling migrations, allowing you to pick the one that best fits your project's architecture and requirements. The key is to choose the right tool based on your team's preferences, development processes, and the specific needs of your application's database schema.

@@ -1,33 +0,0 @@

# Data Partitioning and Sharding Patterns

In this section, we will discuss data partitioning and sharding patterns in PostgreSQL. When dealing with large datasets or high-throughput applications, it is essential to distribute the data across multiple databases or servers to achieve better performance, scalability, and maintainability.

## Data Partitioning

Data partitioning is a technique that divides a large table into smaller, more manageable pieces called partitions. Each partition is a smaller table that stores a subset of the data, usually based on specific criteria such as ranges, lists, or hashes. Partitioning can improve query performance, simplify data maintenance tasks, and optimize resource utilization.

PostgreSQL supports different partitioning methods, such as:

- **Range Partitioning:** The data in a range-partitioned table is separated into partitions based on a specified range of values for a given column. For example, orders could be partitioned by date range, with each partition containing orders within a specific date interval.

- **List Partitioning:** The data in a list-partitioned table is separated into partitions based on specified discrete sets of values for a given column. For example, customers could be partitioned by their country, with each partition storing customers from a specific country.

- **Hash Partitioning:** The data in a hash-partitioned table is divided into partitions using a hash function applied to one or more columns. This method distributes data uniformly across all partitions, which helps with load balancing and parallel query processing. For example, products could be hash-partitioned based on the product ID.

For more information on partitioning in PostgreSQL, refer to the [official documentation](https://www.postgresql.org/docs/current/ddl-partitioning.html).

## Sharding

Sharding is a technique that splits a large dataset across multiple database instances or servers, called shards. Each shard is an independent and self-contained unit that holds a portion of the overall data, and shards can be distributed across different geographical locations or infrastructures.

In a PostgreSQL environment, sharding can be achieved in different ways:

- **Sharding at the application level:** The application defines the logic that decides which shard stores a specific data record, and communicates directly with each shard to query or modify the data.

- **Sharding using foreign data wrappers:** PostgreSQL provides a feature called foreign data wrappers (FDW) that allows a PostgreSQL server to access data stored in remote servers, treating them as local tables. With this technique, the data can be sharded across multiple remote servers, and the local PostgreSQL instance acts as a coordinator for accessing these shards.

- **Sharding using 3rd-party tools:** Several 3rd-party tools, such as Pgpool-II, Citus, and PL/Proxy, can be used for sharding purposes. These tools handle connection pooling, load balancing, and data distribution across multiple PostgreSQL instances. The choice of tool depends on the requirements, complexity, and desired level of control over the sharding logic.

For more information on sharding in PostgreSQL, refer to this [comprehensive guide](https://www.citusdata.com/blog/2017/07/31/sharding-in-postgresql/).

Implementing data partitioning or sharding requires careful planning and analysis of data distribution, query patterns, and system resources. Balancing the trade-offs of manageability, performance, and scalability is crucial for a successful implementation.

@@ -0,0 +1,13 @@

# Data Partitioning

Data partitioning is a technique that divides a large table into smaller, more manageable pieces called partitions. Each partition is a smaller table that stores a subset of the data, usually based on specific criteria such as ranges, lists, or hashes. Partitioning can improve query performance, simplify data maintenance tasks, and optimize resource utilization.

PostgreSQL supports different partitioning methods, such as:

- **Range Partitioning:** The data in a range-partitioned table is separated into partitions based on a specified range of values for a given column. For example, orders could be partitioned by date range, with each partition containing orders within a specific date interval (see the sketch after this list).

- **List Partitioning:** The data in a list-partitioned table is separated into partitions based on specified discrete sets of values for a given column. For example, customers could be partitioned by their country, with each partition storing customers from a specific country.

- **Hash Partitioning:** The data in a hash-partitioned table is divided into partitions using a hash function applied to one or more columns. This method distributes data uniformly across all partitions, which helps with load balancing and parallel query processing. For example, products could be hash-partitioned based on the product ID.
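
A minimal sketch of declarative range partitioning, using a hypothetical `orders` table:

```sql
CREATE TABLE orders (
    order_id   bigint NOT NULL,
    order_date date   NOT NULL,
    amount     numeric(10,2)
) PARTITION BY RANGE (order_date);

-- one partition per year; rows are routed automatically on INSERT
CREATE TABLE orders_2024 PARTITION OF orders
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');
CREATE TABLE orders_2025 PARTITION OF orders
    FOR VALUES FROM ('2025-01-01') TO ('2026-01-01');
```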

For more information on partitioning in PostgreSQL, refer to the [official documentation](https://www.postgresql.org/docs/current/ddl-partitioning.html).

@@ -0,0 +1,11 @@

# Sharding Patterns

Sharding is a technique that splits a large dataset across multiple database instances or servers, called shards. Each shard is an independent and self-contained unit that holds a portion of the overall data, and shards can be distributed across different geographical locations or infrastructures.

In a PostgreSQL environment, sharding can be achieved in different ways:

- **Sharding at the application level:** The application defines the logic that decides which shard stores a specific data record, and communicates directly with each shard to query or modify the data.

- **Sharding using foreign data wrappers:** PostgreSQL provides a feature called foreign data wrappers (FDW) that allows a PostgreSQL server to access data stored in remote servers, treating them as local tables. With this technique, the data can be sharded across multiple remote servers, and the local PostgreSQL instance acts as a coordinator for accessing these shards (see the sketch after this list).

- **Sharding using 3rd-party tools:** Several 3rd-party tools, such as Pgpool-II, Citus, and PL/Proxy, can be used for sharding purposes. These tools handle connection pooling, load balancing, and data distribution across multiple PostgreSQL instances. The choice of tool depends on the requirements, complexity, and desired level of control over the sharding logic.
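
A minimal sketch of the FDW approach with `postgres_fdw` (the host, database, and credentials are placeholders):

```sql
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- declare the remote shard and how to authenticate against it
CREATE SERVER shard_1 FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'shard1.internal', port '5432', dbname 'app');

CREATE USER MAPPING FOR CURRENT_USER SERVER shard_1
    OPTIONS (user 'app_user', password 'secret');

-- expose the remote table locally; queries against it are forwarded to the shard
CREATE FOREIGN TABLE orders_shard_1 (
    order_id   bigint,
    order_date date,
    amount     numeric(10,2)
) SERVER shard_1 OPTIONS (schema_name 'public', table_name 'orders');
```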

@@ -0,0 +1,5 @@

# explain.dalibo.com

explain.dalibo.com is a free service that lets you visualize and analyze the execution plans of your queries, in the same spirit as the [explain.depesz.com](https://explain.depesz.com/) service.
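
To get a plan you can paste into the service, run your query under `EXPLAIN` (the query itself is illustrative):

```sql
EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON)
SELECT * FROM orders WHERE amount > 100;
```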

- [explain.dalibo.com](https://explain.dalibo.com/)

@@ -0,0 +1,39 @@

# pgCluu

pgCluu is a powerful and easy-to-use PostgreSQL performance monitoring and auditing tool. This open-source program collects statistics and provides various metrics for analyzing PostgreSQL clusters, helping you discover performance bottlenecks and optimize your cluster's performance.

## Key Features

- Collects and analyzes PostgreSQL statistics and operating-system metrics.
- Produces reports with insights into many aspects of the cluster, such as queries, locks, indexes, tablespaces, connections, and more.
- Offers graphs for visualizing performance data.

## Installation and Usage

To install pgCluu, follow these steps:

- Install the required dependencies:
```bash
sudo apt-get install perl libdbi-perl libdbd-pg-perl libpg-perl libjson-perl rrdtool librrds-perl
```
- Download and extract the latest pgCluu release from [the official GitHub repository](https://github.com/darold/pgcluu/releases):
```bash
wget https://github.com/darold/pgcluu/archive/refs/tags/v3.1.tar.gz
tar xzf v3.1.tar.gz
```
- Run the pgCluu collector to gather statistics (`-D` daemonizes the collector and `-i` sets the sampling interval in seconds; connection settings for your cluster can be supplied as well):
```bash
cd pgcluu-3.1/bin
./pgcluu_collectd -D -i 60 /path/to/output_directory
```
- Generate the report from the collected data:
```bash
./pgcluu -o /path/to/report_directory /path/to/output_directory
```
- Serve the report with a web server, or browse the generated HTML files directly, as sketched below.
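
A quick, illustrative way to browse the generated report locally without setting up a full web server:

```bash
cd /path/to/report_directory
python3 -m http.server 8080   # then open http://localhost:8080/ in a browser
```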

## Configuration

Before running the pgCluu collector (`pgcluu_collectd`), provide the appropriate connection values for your PostgreSQL cluster, such as hostname, port number, database name, and login credentials.

Apart from the PostgreSQL-specific settings, you can also tweak other options, such as the output format of the graphs, the time range the report covers, and more.