Tuesday, November 28, 2023

Cloud - Spring Boot Cloud

Spring Boot Cloud


Spring Boot Cloud (more precisely, Spring Cloud, which builds on Spring Boot) is a framework that provides a set of tools and libraries for building cloud-native applications. It provides abstractions for common cloud patterns, such as service discovery, load balancing, and distributed tracing, and it simplifies the development and deployment of microservices by letting you connect your applications to a variety of cloud services. This helps you build applications that are more scalable, resilient, and easier to manage.

Key Features of Spring Boot Cloud:

Service Discovery: 

Spring Cloud provides a variety of service discovery mechanisms, such as Eureka and Consul, that allow your applications to find each other. This makes it easy to scale your applications horizontally.

Load Balancing: 

Spring Cloud also provides client-side load-balancing mechanisms, such as Spring Cloud LoadBalancer (and, historically, Netflix Ribbon), that let you distribute traffic across your application instances, while Spring Cloud Circuit Breaker adds fault tolerance when an instance misbehaves. This improves the performance and resilience of your applications.

Distributed Tracing: 

Spring Cloud Sleuth and Zipkin can be used to trace requests across multiple microservices. This helps you identify and troubleshoot performance problems.

Config Server: 

Spring Cloud Config Server allows you to manage application configuration in a central location. This makes it easier to deploy and manage your applications.

Containerization: 

Spring Boot Cloud can be used to build cloud-native applications that are containerized using Docker. This makes it easy to deploy your applications to a variety of cloud platforms.

Benefits of Spring Boot Cloud

Reduced Development Time: 

Spring Boot Cloud can save you a lot of time and effort by providing pre-built components for cloud development.

Improved Scalability: 

Spring Boot Cloud makes it easy to scale your applications horizontally, which improves their performance and resilience.

Simplified Deployment: 

Spring Boot Cloud can simplify the deployment of your applications to a variety of cloud platforms.

Reduced Operating Costs: 

Spring Boot Cloud can help you reduce your operating costs by making it easier to manage and maintain your applications.

Here are some of the most popular Spring Boot Cloud projects:

Spring Cloud OpenFeign:

 A declarative web service client library that makes it easy to call remote APIs.

Spring Cloud Gateway: 

A high-performance, low-latency API gateway that provides routing, service discovery, load balancing, and security.

Spring Cloud Stream: 

A framework for building microservices that communicate with each other using message queues.

Spring Cloud Data Flow:

A platform for building and managing stream and batch data processing pipelines.

Spring Cloud Function: 

A framework for building and running serverless functions.

If you are developing cloud-native applications, then Spring Boot Cloud is a valuable tool that can help you save time, reduce costs, and improve the performance of your applications.

Thursday, November 23, 2023

Database Concepts

Inner Joins



An inner join combines rows from two tables based on matching values in both tables. It returns only rows that have matching values in both tables.

Customers Table

Orders Table


Join Customers and Orders Tables


Left Outer Joins



A left join returns all rows from the left table and the matched rows from the right table. If there are no matching rows in the right table, the corresponding columns will be filled with NULL values.

Customers Table

Orders Table


Left Outer Join
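Using the same illustrative customers/orders data (the post's tables were images), a left join keeps every customer and fills the order columns with NULL (None in Python) where no order exists:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (order_id INTEGER PRIMARY KEY,
                     customer_id INTEGER REFERENCES customers(customer_id));
INSERT INTO customers VALUES (1, 'Alice'), (2, 'Bob'), (3, 'Carol');
INSERT INTO orders VALUES (10, 1), (11, 1), (12, 2);
""")

# Carol has no orders, but she still appears, with NULL for order_id.
rows = con.execute("""
    SELECT c.name, o.order_id
    FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.customer_id
    ORDER BY c.customer_id, o.order_id
""").fetchall()
print(rows)  # [('Alice', 10), ('Alice', 11), ('Bob', 12), ('Carol', None)]
```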



Right Outer Joins



A right join returns all rows from the right table and the matched rows from the left table. If there are no matching rows in the left table, the corresponding columns will be filled with NULL values.

Customers Table

Orders Table

Right Outer Join


Full Joins



A full join returns all rows from both tables, regardless of whether there are matching rows in the other table. If there are no matching rows in the other table, the corresponding columns will be filled with NULL values.

Customer Table

Orders Table

Full Join


NULL IDs appear in some of the rows because of the way full joins work. A full join returns all rows from both tables, regardless of whether there is a matching row in the other table. This means that the full join includes rows from the left table (customer table) that do not have matching rows in the right table (order table). These rows have NULL values in the order_id and order_date columns.

In this specific example, there are three rows in the full join that have NULL IDs. These rows represent customers who have not placed any orders. The NULL IDs are simply placeholders to indicate that there is no corresponding order for these customers.

Inner & Outer Joins


Inner and outer joins are the two main types of joins. An inner join returns only rows that have matching values in both tables, while an outer join also keeps rows without a match: a left or right outer join returns all rows from one table plus the matched rows from the other, and a full outer join returns all rows from both.

Customer Table

Orders Table

Inner Join

Left Outer Join
Right Outer Join

Full Outer Join Only



Customer Table



Orders Table



Full Outer Join


Rows with Matching Counterparts


Why Transactions

Transactions are used to ensure data integrity in a database. A transaction is a series of operations that must all succeed or fail as a single unit. If any operation in the transaction fails, the entire transaction is rolled back, and the database is returned to the state it was in before the transaction began.

Begin, Commit, and Rollback

The BEGIN, COMMIT, and ROLLBACK statements are used to control transactions. 

The BEGIN statement starts a new transaction. 

The COMMIT statement commits the transaction, making the changes to the database permanent. 

The ROLLBACK statement rolls back the transaction, undoing any changes that were made.
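A minimal sketch of these three statements, using SQLite via Python with an illustrative bank-transfer table (isolation_level=None puts the driver in autocommit mode so we can issue BEGIN/COMMIT/ROLLBACK ourselves):

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
con.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
con.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")

# A transfer that succeeds: both updates become permanent together.
con.execute("BEGIN")
con.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
con.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 2")
con.execute("COMMIT")

# A transfer abandoned mid-way: ROLLBACK undoes the partial update.
con.execute("BEGIN")
con.execute("UPDATE accounts SET balance = balance - 999 WHERE id = 1")
con.execute("ROLLBACK")

balances = dict(con.execute("SELECT id, balance FROM accounts"))
print(balances)  # {1: 70, 2: 80}
```

The rolled-back debit of 999 leaves no trace: only the committed transfer is visible.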



Nested Transactions


Nested transactions are transactions that are contained within other transactions. They allow you to group related operations together and roll them back independently of the outer transaction. Many databases (including PostgreSQL and SQLite) implement this with savepoints rather than truly nested BEGIN/COMMIT blocks.
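A sketch of the savepoint form of nesting, using SQLite via Python (table and savepoint names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
con.execute("CREATE TABLE log (msg TEXT)")

con.execute("BEGIN")
con.execute("INSERT INTO log VALUES ('outer work')")

# A SAVEPOINT acts as a nested transaction: we can roll back to it
# without abandoning the outer transaction.
con.execute("SAVEPOINT inner_step")
con.execute("INSERT INTO log VALUES ('inner work')")
con.execute("ROLLBACK TO SAVEPOINT inner_step")  # undoes only 'inner work'

con.execute("COMMIT")
msgs = [r[0] for r in con.execute("SELECT msg FROM log")]
print(msgs)  # ['outer work']
```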

Primary Keys and Indexes


A primary key is a unique identifier for a row in a table; it enforces uniqueness and is typically backed by an index. An index is a data structure that lets the database quickly find rows in a table by a particular value, which can greatly improve the performance of queries.


View Indexes through Terminal


You can view the indexes for a table from the terminal. In MySQL this is the SHOW INDEXES statement; in PostgreSQL you can query the pg_indexes view or use \d table_name in psql. Either way, you can see the name of each index, the columns it covers, and its type.

Create and Drop Indexes


You can create and drop indexes using the CREATE INDEX and DROP INDEX statements. The CREATE INDEX statement creates an index on a table. The DROP INDEX statement drops an index from a table.

Indexes in action


Indexes can significantly improve the performance of database queries. When you query a table, the database engine will use the index to find the rows that match your query criteria. This can save a lot of time, especially if the table is large.

E-commerce website:

Imagine an e-commerce website with millions of products and customers. 

When a customer searches for a specific product, the database needs to quickly find the relevant product information

Without indexes, the database would have to scan the entire product table, which could take a long time and slow down the website. However, with indexes on the product name, product category, and product price, the database can quickly locate the relevant products, resulting in a faster and more responsive user experience.
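The full-scan-versus-index difference can be observed directly. A sketch using SQLite via Python with an illustrative products table; EXPLAIN QUERY PLAN shows which access path the engine picks (the exact wording of the plan varies across SQLite versions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (id INTEGER PRIMARY KEY,"
            " name TEXT, category TEXT, price REAL)")
con.executemany(
    "INSERT INTO products (name, category, price) VALUES (?, ?, ?)",
    [(f"item{i}", "electronics" if i % 2 else "books", i * 1.5)
     for i in range(1000)])

# Without an index, the lookup scans the whole table.
plan_before = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM products WHERE category = 'books'"
).fetchall()
print(plan_before[0][3])  # e.g. "SCAN products"

con.execute("CREATE INDEX idx_products_category ON products(category)")

# With the index, the engine searches the index instead of scanning.
plan_after = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM products WHERE category = 'books'"
).fetchall()
print(plan_after[0][3])  # e.g. "SEARCH products USING INDEX idx_products_category"
```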

Here are the key reasons why indexing enhances performance:

Reduced I/O operations:
 Indexes minimize the number of I/O operations required to retrieve data. Instead of reading through the entire table, the database can directly access the relevant data blocks using the index, significantly reducing the number of disk reads and improving overall efficiency.


Efficient data filtering: 
Indexes allow for filtering and sorting data much faster than scanning the entire table. When a query filters data based on specific criteria, the index can quickly identify the rows that satisfy the filter conditions, reducing the amount of data that needs to be processed.


Improved query execution time: 
By reducing I/O operations and filtering data efficiently, indexes significantly reduce the time it takes to execute queries. This is particularly beneficial for complex queries that involve multiple conditions and aggregations.


Enhanced scalability: 
As the size of the dataset grows, indexes become even more crucial for maintaining performance. Without indexes, the time required to scan the entire dataset increases exponentially with data volume. However, indexes allow the database to efficiently locate data regardless of the dataset size, ensuring consistent performance even for large databases.


Write overhead outweighed by read gains: 
Creating and maintaining indexes does introduce some overhead on data insertion and updates, but the performance gains from faster data retrieval usually far outweigh that cost. Additionally, modern database systems have optimized indexing techniques to minimize the impact on writes.

Indexing is an essential technique for optimizing database performance and ensuring efficient data retrieval. It plays a critical role in various applications that handle large amounts of data, such as e-commerce platforms, social media networks, and financial systems.

Multi Column Indexes


A multi-column index is an index that is created on multiple columns in a table. Multi-column indexes can be used to improve the performance of queries that involve multiple columns.
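A sketch of the column-order caveat of multi-column indexes, using SQLite via Python (table and column names are illustrative): the index can serve queries on its leading column(s), but a query on only a trailing column falls back to a scan.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (last_name TEXT, first_name TEXT)")
con.execute("CREATE INDEX idx_people_name ON people(last_name, first_name)")

# A query constraining both columns (or just the leading one) can use the index.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM people "
    "WHERE last_name = 'Smith' AND first_name = 'Ann'").fetchall()
print(plan[0][3])

# A query on only the trailing column cannot seek the index; it scans.
plan2 = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM people WHERE first_name = 'Ann'"
).fetchall()
print(plan2[0][3])
```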

Unique Indexes


A unique index is an index that ensures that each value of the indexed column is unique. This can be used to prevent duplicate data from being entered into the table.
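A sketch of a unique index rejecting a duplicate, using SQLite via Python (the users/email schema is illustrative); the violation surfaces as an integrity error:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
con.execute("CREATE UNIQUE INDEX idx_users_email ON users(email)")

con.execute("INSERT INTO users (email) VALUES ('a@example.com')")
try:
    # A second row with the same email violates the unique index.
    con.execute("INSERT INTO users (email) VALUES ('a@example.com')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
print(duplicate_allowed)  # False
```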

Partial Indexes


A partial index is an index that is created on only a subset of the rows in a table. Partial indexes can be used to improve the performance of queries that involve a specific range of values.
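A sketch of a partial index, using SQLite via Python (the orders/status schema is illustrative): only rows matching the index's WHERE clause are indexed, and a query whose filter implies that clause can use it.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders2 (id INTEGER PRIMARY KEY,"
            " status TEXT, total REAL)")
# Index only the rows most queries care about (e.g. open orders).
con.execute("CREATE INDEX idx_open_orders ON orders2(status)"
            " WHERE status = 'open'")

# This filter matches the index's WHERE clause, so the index is usable.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders2 WHERE status = 'open'"
).fetchall()
print(plan[0][3])
```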

Function skeleton


A function skeleton is the basic structure of a function, including the function name, the return type, and the parameter list.


Create functions


You can create functions using the CREATE FUNCTION statement. The CREATE FUNCTION statement defines the function name, the return type, the parameter list, and the function body.
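CREATE FUNCTION as described here is server-side SQL (e.g. PostgreSQL's PL/pgSQL). As a self-contained illustration of the same idea, SQLite lets the driver register a function that SQL can then call; the function name and tax logic below are purely illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Register a Python callable as a SQL function named add_tax taking 2 args.
def add_tax(price, rate):
    return round(price * (1 + rate), 2)

con.create_function("add_tax", 2, add_tax)

# The function is now usable anywhere an expression is allowed.
total = con.execute("SELECT add_tax(100.0, 0.07)").fetchone()[0]
print(total)  # 107.0
```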

View and Drop Functions


You can view and drop functions from SQL. In MySQL, the SHOW CREATE FUNCTION statement shows the definition of a function; in PostgreSQL, use \df+ in psql or pg_get_functiondef. In both, the DROP FUNCTION statement removes a function from the database.

Roles


Roles are used to group users and grant them specific permissions. A role can be granted permissions to access objects in the database.

View Roles


You can list the roles in a database. Some systems provide a SHOW ROLES statement; in PostgreSQL you can use \du in psql or query the pg_roles catalog. Either way, you get a list of all the roles in the database.

Creating roles


You can create roles using the CREATE ROLE statement. The CREATE ROLE statement defines the name of the role and the permissions that the role has.

Privileges


Privileges are granted to users or roles to allow them to perform specific operations on objects in the database. For example, a user may be granted the privilege to create new tables.

Revoking Permissions


You can revoke permissions from users or roles using the REVOKE statement. The REVOKE statement specifies the permissions to revoke and the user or role from which to revoke them.

Member Roles


A member role is a role that is assigned to a user. A user can be assigned to multiple roles, and each role can have different permissions.

What are Schemas


Schemas are used to organize objects in a database. A schema is a collection of related objects, such as tables, views, and functions.

Create Schemas


You can create schemas using the CREATE SCHEMA statement. The CREATE SCHEMA statement defines the name of the schema and the objects that it contains.

Schema Search Path


The schema search path is a list of schemas that the database engine will search for objects when you use them. The schema search path is set at the database level and at the user level.

Grant Schema Usage


You can grant schema usage to users using the GRANT USAGE statement. The GRANT USAGE statement allows a user to access objects in a schema.

Backing up Databases


Backing up databases is important for protecting your data in case of hardware failure or other data loss. There are several ways to back up a database, including using the pg_dump utility or a third-party backup tool.

Restore Database


Restoring a database from a backup is the process of copying the backup data back to the database. There are several ways to restore a database, including using the pg_restore utility or a third-party backup tool.

Backup All databases


You can back up all databases on a system using the pg_dumpall utility. The pg_dumpall utility will create a backup of all the databases on the system, including their data, schema, and privileges.
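pg_dump, pg_restore, and pg_dumpall are PostgreSQL command-line tools and need a running server. As a self-contained sketch of the same backup/restore idea, SQLite's Python driver exposes an online backup API (the note data and use of in-memory databases are illustrative; normally the destination would be a file):

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE notes (body TEXT)")
src.execute("INSERT INTO notes VALUES ('remember to back up')")
src.commit()

# Copy the live database into a second connection; with file-backed
# connections this produces a restorable on-disk backup.
dst = sqlite3.connect(":memory:")
src.backup(dst)

# "Restoring" is just reading from the copy.
restored = dst.execute("SELECT body FROM notes").fetchone()[0]
print(restored)  # remember to back up
```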

Database - Normalization

Benefits of Normalization

  • Reduced Data Redundancy: Normalization eliminates duplicate data, saving storage space and improving database efficiency.
  • Improved Data Integrity: Normalization reduces the likelihood of data anomalies, ensuring data accuracy and consistency.
  • Simplified Data Manipulation: Normalization makes data updates and modifications easier and more efficient.
  • Enhanced Database Scalability: Normalization enables the database to accommodate growing data volumes without performance degradation.

Normalization vs. Denormalization

While normalization is generally beneficial, there may be situations where denormalization, the process of intentionally introducing redundancy, is considered. 

This is typically done to improve performance for specific queries or applications. 

However, denormalization should be done with caution, as it can lead to data anomalies and increased maintenance overhead.

First Normal Form (1NF)

First Normal Form (1NF): Eliminates repeating groups and ensures that each cell in a table contains a single value.

Example 1: Student Registration System

Original Table:

1NF Table Structure:

Student Table:

Course Table

Second Normal Form (2NF)

2NF Original Table Example 1


2NF Table Structure:

Second Normal Form (2NF): Requires that all non-key attributes be fully dependent on the primary key, eliminating partial dependencies.

Course Table:

Prerequisites Table

2NF Original Table Example 2

Online Book Store Original Table:

2NF Table Structure:

Third Normal Form (3NF)

Third Normal Form (3NF): Eliminates transitive dependencies, ensuring that non-key attributes are directly dependent only on the primary key.

Example: University Course Catalog

Original Table

3NF Table Structure:

Department Table

Prerequisite Table

Instructor table


  • The instructor attribute is not fully dependent on the primary key (Course ID), as there could be multiple instructors assigned to a course. Therefore, the instructor attribute should be moved to a separate instructor table with its own primary key.
  • 3NF is a more stringent form of normalization than 2NF, and it can help to further reduce data redundancy and improve data integrity. However, 3NF can also make database queries more complex, and it may not be necessary for all databases.
  • In general, you should follow 3NF normalization for databases that are subject to frequent updates or that need to maintain a high level of data integrity. However, for databases that are not frequently updated or that do not require a high level of data integrity, 2NF normalization may be sufficient.

When to use each normal form:

1NF:

  • Use 1NF for the most basic level of data organization.
  • It eliminates repeating groups and ensures that each cell contains only one value.
  • It is the minimum requirement for a relational database.

2NF:

  • Use 2NF when you need to further reduce data redundancy and eliminate partial dependencies.
  • It ensures that all non-key attributes are fully dependent on the entire primary key, not just a subset of it.
  • It provides a better balance between data integrity and efficiency compared to 1NF.

3NF:

  • Use 3NF when you need the highest level of data integrity and elimination of transitive dependencies.
  • It ensures that no non-key attribute is transitively dependent on another non-key attribute.
  • It provides the highest level of data integrity but may result in more complex queries.

Here's a general guideline for when to use each normal form:

  • Use 1NF for simple databases with minimal data redundancy.
  • Use 2NF for databases with more complex data relationships and a need to reduce redundancy.
  • Use 3NF for databases with high data integrity (high data accuracy and consistency) requirements and where transitive dependencies (indirect dependency relationships) could lead to anomalies.
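The university-catalog example above can be sketched end to end. A 3NF decomposition removes the transitive dependency course → department_id → department_name, and a join proves the original rows are still recoverable (schema and data below are illustrative, using SQLite via Python):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Denormalized form: the department name is repeated on every course row
-- and depends on department_id, not directly on the course.
CREATE TABLE courses_flat (
    course_id INTEGER PRIMARY KEY, title TEXT,
    department_id INTEGER, department_name TEXT);
INSERT INTO courses_flat VALUES
    (1, 'Databases',  10, 'Computer Science'),
    (2, 'Algorithms', 10, 'Computer Science'),
    (3, 'Calculus',   20, 'Mathematics');

-- 3NF: departments get their own table, removing the transitive dependency.
CREATE TABLE departments (department_id INTEGER PRIMARY KEY,
                          department_name TEXT);
CREATE TABLE courses (course_id INTEGER PRIMARY KEY, title TEXT,
                      department_id INTEGER
                          REFERENCES departments(department_id));
INSERT INTO departments
    SELECT DISTINCT department_id, department_name FROM courses_flat;
INSERT INTO courses
    SELECT course_id, title, department_id FROM courses_flat;
""")

# Lossless: a join over the normalized tables rebuilds the original rows.
rows = con.execute("""
    SELECT c.course_id, c.title, d.department_name
    FROM courses c JOIN departments d USING (department_id)
    ORDER BY c.course_id
""").fetchall()
print(rows)
```

Each department name is now stored once, so renaming a department is a single-row update instead of one per course.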

Saturday, November 18, 2023

REST - Designing a RESTful API

Designing a RESTful API involves considering various design patterns to ensure scalability, maintainability, and a good developer experience. Here are some common REST API design patterns:

1. Resource Naming:

  • Singular vs. Plural:
    • Pattern: Use plural nouns for resource names (e.g., /users instead of /user).
    • Reason: Plural resource names are more intuitive and reflect the collections of resources.

2. HTTP Methods:

  • Standard HTTP Methods:
    • Pattern: Use standard HTTP methods (GET, POST, PUT, DELETE) for CRUD operations.
    • Reason: Standardization simplifies the API and makes it more intuitive for developers.

3. Resource Nesting:

  • Resource Nesting:
    • Pattern: Use resource nesting for hierarchical relationships (e.g., /users/{userId}/posts).
    • Reason: Reflects the natural hierarchy and relationships between resources.

4. Pagination:

  • Pagination:
    • Pattern: Implement pagination for large collections using query parameters (e.g., /users?page=1&limit=10).
    • Reason: Enhances performance and reduces response size for clients.
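The page/limit arithmetic behind an endpoint like /users?page=1&limit=10 can be sketched framework-free (function and variable names are illustrative):

```python
def paginate(items, page, limit):
    """Return the slice of `items` for a 1-based page number."""
    start = (page - 1) * limit
    return items[start:start + limit]

users = [f"user{i}" for i in range(1, 26)]  # 25 users in total

page1 = paginate(users, 1, 10)  # user1 .. user10
page3 = paginate(users, 3, 10)  # user21 .. user25 (a short final page)
print(page1[0], page1[-1], len(page3))
```

A real handler would read page and limit from the query string, clamp them to sane bounds, and include total-count metadata in the response.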

5. Filtering, Sorting, and Searching:

  • Filtering, Sorting, and Searching:
    • Pattern: Allow clients to filter, sort, and search resources using query parameters (e.g., /products?category=electronics&sort=price).
    • Reason: Provides flexibility to clients and improves usability.

6. Versioning:

  • API Versioning:
    • Pattern: Include a version number in the API URL (e.g., /v1/users).
    • Reason: Ensures backward compatibility and allows for future changes.

7. HATEOAS (Hypermedia as the Engine of Application State):

  • HATEOAS:
    • Pattern: Include hypermedia links in responses to guide clients on available actions.
    • Reason: Reduces the coupling between the client and server and provides discoverability.

8. Stateless Authentication:

  • Stateless Authentication:
    • Pattern: Use stateless authentication mechanisms like JWT (JSON Web Tokens).
    • Reason: Simplifies server management, improves scalability, and allows for easy distribution.
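The stateless property can be seen in a stripped-down, JWT-like token using only the standard library: the server verifies the signature instead of looking anything up. This is an illustrative sketch of HS256-style signing, not a production implementation (real systems should use a vetted JWT library and handle expiry, key rotation, etc.); the secret and claim names are made up.

```python
import base64, hashlib, hmac, json

SECRET = b"demo-secret"  # illustrative only; use a managed key in practice

def b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict) -> str:
    """Build a JWT-like token: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(SECRET, f"{header}.{body}".encode(),
                          hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify(token: str) -> bool:
    """Stateless check: no server-side session store is consulted."""
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(SECRET, f"{header}.{body}".encode(),
                               hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

token = sign({"sub": "user-42"})
print(verify(token))        # True
print(verify(token + "x"))  # False: any tampering breaks the signature
```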

9. Error Handling:

  • Consistent Error Handling:
    • Pattern: Provide consistent and standardized error responses with meaningful error codes and messages.
    • Reason: Helps developers quickly identify and resolve issues.

10. CORS (Cross-Origin Resource Sharing):

  • CORS Configuration:
    • Pattern: Configure Cross-Origin Resource Sharing headers to control which domains can access the API.
    • Reason: Enhances security by controlling cross-origin requests.

11. Webhooks:

  • Webhooks:
    • Pattern: Allow clients to subscribe to events using webhooks for real-time updates.
    • Reason: Provides a mechanism for asynchronous communication and event-driven architectures.

12. Bulk Operations:

  • Bulk Operations:
    • Pattern: Support bulk operations for efficiency (e.g., bulk updates or deletes).
    • Reason: Reduces the number of requests and improves performance.

13. Rate Limiting:

  • Rate Limiting:
    • Pattern: Implement rate limiting to prevent abuse and ensure fair usage.
    • Reason: Protects the API from abuse and ensures a consistent quality of service.
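One common way to implement this pattern is a token bucket; here is a minimal in-process sketch (parameters are illustrative, and a real API would keep one bucket per client, typically in a shared store like Redis):

```python
import time

class TokenBucket:
    """Allow `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens in proportion to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1   # spend one token for this request
            return True
        return False           # over the limit: reject (e.g. HTTP 429)

bucket = TokenBucket(rate=5, capacity=2)
results = [bucket.allow() for _ in range(3)]
print(results)  # [True, True, False]: burst of 2 allowed, third throttled
```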

14. Asynchronous Operations:

  • Asynchronous Operations:
    • Pattern: Provide asynchronous endpoints for lengthy operations, returning status updates.
    • Reason: Allows clients to perform long-running operations without blocking.

These design patterns can be adapted and combined based on the specific requirements and characteristics of your RESTful API.
