In modern web development, databases play a central role. They store and serve the data that users generate through their interactions with a website or web app. Building an effective web application therefore requires designing and creating the database properly. A sample database is a useful way to learn how to design and build the databases that web applications rely on.
Understanding Database Architecture
Certain questions need to be answered before one delves into the implementation details of a database: its scope, scale, and purpose. The focus of this section is the functionality of a relational database management system. At its most basic, the purpose of such a database is to store tables. A table is a structure built from rows and columns: the rows serve as unique records, and the columns hold one descriptive attribute for each record.
To illustrate: suppose you have a web application that enables users to register and create accounts. Almost every such application has a Users table holding the details of everyone who has registered. The primary concept is that by placing data into tables, data becomes easy to store and efficient to retrieve.
Identifying the Requirements
Identifying the requirements alongside the website's functionality is the essential first step when creating a sample database: consider what kind of data the web application is going to deal with. For instance, a social media application might need a table for users, a table for posts made by users, another for comments posted under a post, and tables that enable liking posts or following users. A straightforward blog, on the other hand, barely needs more than one table for users and another for blog posts.
This clarifies the aim of the database and helps you determine which tables are required, how those tables relate, and what kind of data they will store. For example, in a social media application each post is authored by a single user account, but the number of posts that user authors can be many, and each post may in turn carry many comments. This relationship between users, posts, and comments can be represented with multiple tables and the relational database principles discussed below.
Normalization of the Database
Normalization is one of the key concepts in database management. It is the process of organizing the data in a database in such a way that it reduces redundancy and dependency. By removing duplicate data and ensuring that related data lives in the appropriate tables, normalization improves both the performance and the extensibility of the database.
There are various normal forms in database normalization; the first, second, and third normal forms are the most widely used.
First Normal Form (1NF)
Requires that every value in a column be atomic: each cell holds a single value, and a column may not contain more than one value per cell.
Second Normal Form (2NF)
Requires that all non-key attributes be fully functionally dependent on the primary key of the table.
Third Normal Form (3NF)
Requires that all non-key attributes depend only on the key attribute and on no other non-key attribute.
These forms are about striking a balance between efficiency and effectiveness. Merging data into fewer, wider tables can cut down the table count but may result in redundancy and performance lags. Understanding which performance gains must be prioritized and which efficiency must be compromised is crucial to building a successful database.
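To make the idea concrete, here is a minimal sketch of normalization in SQL. The `Customers` and `Orders` tables are hypothetical examples, not from the original text: in the unnormalized form the customer's details would be repeated on every order, while the normalized (3NF) form stores them once and references them by key.

```sql
-- Before (unnormalized): customer details repeated on every order
-- Orders(order_id, customer_name, customer_email, order_date)

-- After (3NF): customer details stored once, referenced by key
CREATE TABLE Customers (
    id    INT PRIMARY KEY,
    name  VARCHAR(100) NOT NULL,
    email VARCHAR(255) NOT NULL UNIQUE
);

CREATE TABLE Orders (
    id          INT PRIMARY KEY,
    customer_id INT NOT NULL REFERENCES Customers(id),
    order_date  DATE NOT NULL
);
```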
How Relationships are Created Between Tables
Establishing relationships between tables is one of the important concepts of relational databases. This allows the database to find relevant information quickly and efficiently. In the case of a typical web application, there are three main types of relationships that you may encounter:
1. One-to-One Relationships
This happens when one record in one table corresponds to exactly one record in another table. For example, if each user had exactly one profile picture, a ‘Users’ table and a ‘ProfilePictures’ table would be linked by a one-to-one relationship.
2. One-to-Many Relationships
This is the most common type of relationship you will encounter in a web application. It happens when one record in one table corresponds to several records in another table. For instance, one user may write several posts, so there would be a one-to-many relationship between ‘Users’ and ‘Posts’.
3. Many-to-Many Relationships
This type of relationship occurs when a number of records in one table relate to a number of records in another table. For example, a user can have many followers, and a follower can follow a number of users. Such a relationship needs a third table, often called a ‘junction table’, to represent the many-to-many pairs.
Establishing the appropriate table relationships makes complex data manipulations possible within the web application while guaranteeing a high level of data correctness.
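As a hedged sketch of the many-to-many case described above, the following junction table records user-follows-user pairs. It assumes a `Users` table with an integer `id` primary key already exists:

```sql
-- Junction table for a many-to-many "follows" relationship;
-- assumes Users(id INT PRIMARY KEY, ...) already exists
CREATE TABLE Follows (
    follower_id INT NOT NULL,
    followee_id INT NOT NULL,
    PRIMARY KEY (follower_id, followee_id),          -- each pair stored once
    FOREIGN KEY (follower_id) REFERENCES Users(id),  -- follower must be a real user
    FOREIGN KEY (followee_id) REFERENCES Users(id)   -- followed user must exist too
);
```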
Working with Data Integrity
Data integrity means the accuracy and consistency of the data held in a database. A web application needs its data to be accurate and dependable, so data integrity has to be upheld. There are various methods of ensuring data integrity in SQL databases.
One of the methods that significantly aids in maintaining data integrity is the use of constraints. These are rules that govern the data that can be inserted into a table in a relational database. They include the following:
Primary Keys
A primary key is a field, or combination of fields, that serves as a unique identifier for records in a table. For instance, in a table entitled “Users,” the primary key could be the user's ID.
Foreign Keys
A foreign key establishes a relationship between two tables. It links an entry in one table to the corresponding valid entry in another table.
Unique Constraints
These prevent duplication of data within selected fields of a table. For example, in “Users,” a user's email address must be unique so that the same address cannot belong to two users.
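A brief sketch putting the three constraint types together, on hypothetical `Users` and `Posts` tables:

```sql
CREATE TABLE Users (
    id    INT PRIMARY KEY,              -- primary key: unique record identifier
    email VARCHAR(255) NOT NULL UNIQUE  -- unique constraint: no two users share an email
);

CREATE TABLE Posts (
    id      INT PRIMARY KEY,
    user_id INT NOT NULL,
    title   VARCHAR(200) NOT NULL,
    FOREIGN KEY (user_id) REFERENCES Users(id)  -- foreign key: every post links to a valid user
);
```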
In today's interconnected world, it is of utmost importance to focus on securing databases. Technology continues to grow exponentially, and with it the volume of stored data and the demand for data security. Whether for a bank, a hospital, or just your private information, a secure database means lower chances of a breach, data loss, and, most importantly, losing the trust of users. This article looks at best practices that can secure databases, with a special emphasis on SQL security.
Why Is Database Security Important?
Database security prevents unauthorized access, use, modification, or intentional destruction of the database. This protection is important because in a data breach an unauthorized person or group gains access to sensitive data such as individuals' information, financials, or business secrets, which can lead to disastrous outcomes for consumers, the company, and its workforce.
This is a higher-level article, so the database security strategies discussed will be largely focused on relational databases and on best practice in systems that use SQL. Structured Query Language (SQL) is the universal language for managing and operating databases; it offers a wide range of functionality for interacting with them and must therefore be used correctly.
Principle of Least Privilege
One of the most important aspects of the integrity of an information system is the principle of least privilege (POLP). This principle involves allotting each user or application only the level of access it needs. For example, an employee tasked with database reporting should not have access to delete records or change the definition of the database.
In an environment where systems apply this principle, the risk from a security breach is contained. If any user account or application is compromised, the damage done is limited to the privileges allocated to that account. This cuts down numerous chances for misappropriation or unintentional alteration of the database.
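As a minimal sketch of the principle (MySQL-style syntax; the account name and schema are hypothetical), a reporting account would be granted read access and nothing more:

```sql
-- Create a reporting account with no privileges by default
CREATE USER 'report_user'@'localhost' IDENTIFIED BY 'use-a-strong-password-here';

-- Grant read-only access to the reporting schema; the account
-- cannot INSERT, UPDATE, DELETE, or alter any definitions
GRANT SELECT ON analytics.* TO 'report_user'@'localhost';
```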
User Authentication and Strong Passwords
Authentication is the act of establishing the identity of a user, or a program, trying to access a database. Making sure that only permitted users can log in to the database is key to database security. For authentication, SQL databases use usernames and passwords. The downside is that a weak password can be easily guessed or cracked, letting attackers into the system.
Thus, it is recommended that only strong passwords be used: passwords consisting of upper- and lower-case letters, numbers, and special characters. Easily guessable information such as phone numbers, birthdates, and common words should be avoided as well. Additionally, multi-factor authentication (MFA), whereby a user has to provide several forms of verification to be allowed access to the database, adds a further measure restricting access by unauthorized personnel.
How Does Encryption Protect an SQL Database?
In the case of an SQL database, encryption is one of the most useful techniques. The process converts readable plain-text data into an encrypted format called ciphertext. Only those with a valid decryption key can transform the ciphertext back into readable plain text.
Encrypting sensitive information such as credit card numbers, identification details, and health information is important because even if a perpetrator manages to steal the database, they will not be able to make sense of the data and use it. Sensitive data should be encrypted both at rest (where the data is stored) and in transit (while it travels over the network). Many modern databases provide such encryption features, so they should be activated in order to protect sensitive data.
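As a hedged illustration (MySQL syntax; the `Payments` table and the `@encryption_key` variable are assumptions for the example), sensitive values can be encrypted at the column level:

```sql
-- The key should come from a key-management system,
-- never be hard-coded in application SQL
SET @encryption_key = 'key-from-a-secure-key-store';

-- Store the card number encrypted (column type VARBINARY assumed)
INSERT INTO Payments (user_id, card_number_enc)
VALUES (42, AES_ENCRYPT('4111111111111111', @encryption_key));

-- Decrypt only when strictly necessary
SELECT CAST(AES_DECRYPT(card_number_enc, @encryption_key) AS CHAR) AS card_number
FROM Payments
WHERE user_id = 42;
```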
How Can We Update and Patch Databases?
Regularly updating and patching SQL databases is essential, since SQL database vendors routinely ship security fixes, new features, and performance improvements. Each time an update comes out and is not adopted, the system remains exposed to the threats that update may fix.
A routine update practice should be established, and security patches should be applied quickly and without delay. Also, for mission-critical systems, consider a staging environment in which updates are tested before being deployed to production. This minimizes the possibility of new features or small fixes introducing issues that cause downtime.
In addition to applying essential updates and fixes to the database itself, the operating system and other related components should also be kept up to date. Remember that unpatched software can be an entry point for attackers.
Monitoring and Auditing of Database Activities
Security is paramount in protecting your databases, and one primary way to achieve it is consistent auditing and monitoring of the database. Through such an audit trail, who accessed the database, at what time and date, and what actions they took can all be known.
It should not surprise you that most SQL database management systems come with logging and auditing capabilities, covering events such as failed login attempts and changes to data or schema. It is worth establishing alerts for suspicious patterns among these events. Such logs can be voluminous, but if labeled comprehensively they prove invaluable in warning of insecure behavior.
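As one hedged example of built-in logging (MySQL syntax; production systems more often rely on dedicated audit plugins or external tooling), the general query log can be switched on and inspected:

```sql
-- Log all statements to the mysql.general_log table
SET GLOBAL general_log = 'ON';
SET GLOBAL log_output  = 'TABLE';

-- Review recent activity: who ran what, and when
SELECT event_time, user_host, argument
FROM mysql.general_log
ORDER BY event_time DESC
LIMIT 50;
```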
You may also want to consider external monitoring software that adds capabilities such as active threat and anomaly detection with alerting.
Prevention of SQL Injection
One of the classic weak spots of SQL databases is SQL injection. In this attack, malicious code is inserted through an input method or user interface, such as a search or login form, and becomes part of a query. Left unchecked, an attacker can read, amend, or delete the entire database without limit.
Preventing SQL injection is an important concern for any database, application, or website. The first step is proper sanitization of every user input, which comprises checking and filtering all data passed into an SQL statement. One of the best means of preventing injection is using prepared statements and parameterized queries. These methods allow the user to send a value, but only that: no attached code that would let an intruder tamper with the commands.
Try to refrain from building dynamic SQL queries from user input, because they increase the chances of an injection attack occurring. Stored procedures can also reduce the chances of SQL injection.
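A minimal sketch of a server-side prepared statement (MySQL syntax; the `Users` table and the email value stand in for real application input):

```sql
-- The SQL text is fixed; the user-supplied value travels separately
PREPARE find_user FROM
    'SELECT id, name FROM Users WHERE email = ?';

SET @user_email = 'alice@example.com';  -- value taken from the login form
EXECUTE find_user USING @user_email;    -- the value cannot alter the query text

DEALLOCATE PREPARE find_user;
```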
Data Backups and Disaster Recovery
Alongside securing the database, it is of utmost importance that a disaster recovery plan is in place. Equipment can fail, natural disasters can strike, and malicious attacks can simply wipe out data. Regular backups are one of the best ways of making sure your data can be recovered: they greatly increase the chances of restoring the database to a known, secure state.
Make sure that encryption is in place when your backups are stored. They must also be kept in a separate place away from local storage, to safeguard them from being damaged. Moreover, test the backup and restoration processes regularly to make sure they will be of value when needed.
Performance is an essential consideration in any database, especially as data expands across growing systems. With SQL being the most widely used language for working with data, maximizing query performance is necessary. Unresponsive servers, long wait times with large amounts of data, and many other issues plague organizations due to suboptimal queries, and such challenges hinder businesses from streamlining their operations. This article pins down the challenges caused by poor query optimization and offers solutions to enhance SQL performance.
The Basics: What Is SQL Performance?
SQL query performance refers to how quickly and efficiently a query retrieves or manipulates data in a database. **Wasteful resource usage**, including processor time, memory, and even disk I/O, causes performance issues. Expect long execution periods for queries against vast tables that also involve complex joins and filtering. Freeing those resources allows the system to devote them to other processes.
Every time a SQL query is issued, the DBMS parses it and builds an execution plan, estimating the order in which operations should be performed so as to produce the required output in the shortest time possible. The structure of the query, the available indexes, and the size of the data are some of the factors that influence this evaluation step. An important aspect of query optimization is formulating SQL statements in such a way that they use the least possible resources and time during execution.
The Significance of Indexing
Indexing is one of the key principles for increasing query speed, so how it is utilized deserves great emphasis. An **index** is a database object used to speed up retrieval of rows from a table. When a query is run, the DBMS first checks whether an index covers the requested data. Instead of a dreadful full table scan, the index lets the database find the required rows in a few simple steps.
Indexes are particularly helpful when filtering data with `WHERE` clauses or sorting with `ORDER BY`. They help locate the relevant rows among the records stored in a table. It should be emphasized, however, that while retrieval of information becomes faster, adding new records and updating existing ones tend to become slower. This is because each index must also be updated every time a record changes.
To optimize query performance, make sure relevant indexes are created on those columns that are used often in `WHERE`, `JOIN`, and `ORDER BY` clauses. **Index maintenance** is also essential, so that write performance is not compromised at the same time.
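A short sketch, assuming a hypothetical `Posts(user_id, created_at, title, ...)` table, of a composite index serving both the filter and the sort:

```sql
-- One composite index covers the WHERE filter and the ORDER BY sort
CREATE INDEX idx_posts_user_created ON Posts (user_id, created_at);

-- The database can now locate this user's rows via the index and
-- return them already sorted, avoiding a full table scan
SELECT id, title
FROM Posts
WHERE user_id = 42
ORDER BY created_at DESC;
```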
Reducing the Use of Nested Queries
Nested queries are queries placed within other queries, called parent queries. In some cases, nested queries help in getting the desired results, but if used too commonly or designed poorly, they decrease performance: a sub-query often has to be run several times to produce the final result, causing the query to execute slowly. Where the scope permits, it is better to rewrite nested queries as joins, as joins are usually more efficient.
The use of sub-queries in `SELECT` clauses is a particular concern, as the sub-query has to be calculated for each of the rows returned by the outer query. This adds unnecessary work and increases execution time. Rather than using sub-queries in this manner, look towards joins or `WITH` clauses, which enhance both the clarity and the speed of the SQL.
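For illustration, here is a hedged sketch (hypothetical `Users` and `Posts` tables) of a correlated sub-query in a `SELECT` clause rewritten as a join:

```sql
-- Slower: the sub-query may be evaluated once per user
SELECT u.name,
       (SELECT COUNT(*) FROM Posts p WHERE p.user_id = u.id) AS post_count
FROM Users u;

-- Often faster: a single join followed by one grouping pass
SELECT u.name, COUNT(p.id) AS post_count
FROM Users u
LEFT JOIN Posts p ON p.user_id = u.id
GROUP BY u.id, u.name;
```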
Optimizing Table Joins
Joining tables is one of the basic operations of the SQL language, but it can be problematic when the tables to be joined are large. Different factors affect the performance of joins, including the type of join and the join order.
When optimizing joins, it helps to know the different types of joins and how they impact performance when running a query. For example, `LEFT JOIN` retrieves all rows from the left table plus the matching rows from the right table. For very large right tables where we need only a small subset of rows, an `INNER JOIN` can be more useful: it returns only the records where the tables overlap, so the query works only with the necessary data.
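The contrast, sketched on the same hypothetical `Users`/`Posts` pair:

```sql
-- LEFT JOIN: every user appears, even those who have never posted
SELECT u.name, p.title
FROM Users u
LEFT JOIN Posts p ON p.user_id = u.id;

-- INNER JOIN: only users with at least one post, so less data
-- is carried through the rest of the query
SELECT u.name, p.title
FROM Users u
INNER JOIN Posts p ON p.user_id = u.id;
```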
The order in which tables are joined also affects the performance of the query. Most optimizers choose the join order automatically, but where the order matters, joining the smaller or more selective table first minimizes the number of rows processed.
Refraining from SELECT * and Fetching More than Required
One performance issue I can pretty much guarantee will come up often is developers using `SELECT *`, which returns all columns in a table. While there are situations where a `SELECT *` query is suitable, when the user only needs a specific set of columns, fetching unnecessary columns is inefficient. Put simply, loading unneeded data escalates the work for the database, which now has to read and send data nobody uses.
Explicitly stating which columns are needed in a query **improves the performance**: it limits the amount of data the database has to process and return to the client, improving both execution time and the network's effectiveness.
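A two-line sketch of the difference, on the hypothetical `Users` table:

```sql
-- Wasteful: returns every column, including large ones never used
SELECT * FROM Users WHERE id = 42;

-- Better: the database reads and transmits only what is required
SELECT id, name, email FROM Users WHERE id = 42;
```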
Reducing Complicated SQL Functions in Queries
Furthermore, sophisticated calculations and computations inside SQL queries can cause performance degradation. For instance, applying `CASE` statements, aggregations, or row-by-row formulas can be costly in computation resources, particularly at high transaction volumes. Where possible, it may be reasonable to move such computations into the application layer or do the work elsewhere.
However, if the calculations need to be contained within a query, they should be limited to the rows that are actually relevant. Where possible, use tighter `WHERE` clauses to help filter data first. This narrows down the number of rows involved in the calculation and therefore increases performance.
Evaluating Query Execution Plans
Most modern database management systems provide tools to analyze query execution plans. The execution plan is a report detailing how a query will be run: the exact steps involved and where the bottlenecks lie. Execution plans help administrators and IT managers spot poorly performing parts of a query, often without running it against the full data set.
Tools like `EXPLAIN` (MySQL, PostgreSQL) or `SET STATISTICS IO` (in SQL Server) make the steps of query execution visible. These tools assist in identifying **full table scans**, bad joins, or absent indexes that are bottlenecking performance.
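A hedged sketch of how a plan is requested (MySQL/PostgreSQL syntax, same hypothetical tables as above):

```sql
-- Prefix the query with EXPLAIN to see the plan without running it
EXPLAIN
SELECT u.name, COUNT(p.id) AS post_count
FROM Users u
JOIN Posts p ON p.user_id = u.id
WHERE p.created_at >= '2024-01-01'
GROUP BY u.id, u.name;
-- In the output, watch for full table scans and missing indexes
```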
Within the domain of data management and usability, importing and exporting are processes of great importance. With relational database systems you often need to move data across systems, databases, and formats on a regular basis. This is why learning how to import and export data in a SQL database is vital to the smooth completion of your database management activities. In this article the reader is introduced to the core concepts of importing and exporting in SQL, together with their role in database management.
What Do Importing and Exporting a Database Involve?
Exporting data means converting the data from an existing database into a format suitable for storage, sharing, or moving to another location. This typically includes transforming data into a CSV, Excel, or JSON format to be utilized outside the database management system.
Importing data, by contrast, is the procedure of bringing data from a file or a different database into the working database. This is critical when consolidating data from different databases into your active one, or when data needs to be changed or relocated across several systems.
Both procedures are important in many situations: data migration, data backup, linking or integrating two or more different databases, or moving a data set from production to a testing environment.
Exporting Data in SQL
Exporting a database is about extracting information from it into some desired, well-defined format. The common way of solving the data export problem is writing SQL queries, or using tools that write data out to an external file. This external file can be a CSV, an Excel, or a text file.
One possible solution is to use a SELECT statement whose output is stored in a file. Most SQL database management systems (DBMSs) ship with a standard complement of data export utilities, which give users options for the selection of file formats, delimiters, and so on.
A very simple example: whenever bulk data has to be transferred, one of the first things people do is convert the data to CSV. The reason is that CSV files take little space and can be easily utilized by other applications such as Microsoft Excel.
The exporting process generally involves the following steps:
1. Deciding which data to export: in this step the user specifies only the tables or fields to be exported.
2. Choosing the appropriate file format: in most cases the data is exported in formats such as CSV, Excel, or plain text documents.
3. Setting up the export mechanism: this may involve running database commands or using dedicated tools or other applications.
Once the data is exported, users can make it available to others, conveniently keep copies of essential data, or move it to another system for further use.
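A hedged sketch of such an export (MySQL's `SELECT ... INTO OUTFILE`; PostgreSQL offers `COPY ... TO` for the same job; the `Users` table and file path are assumptions):

```sql
-- Write the chosen columns to a CSV file on the server
SELECT id, name, email
INTO OUTFILE '/tmp/users.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM Users;
```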
Importing Data in SQL
Importing is the opposite of exporting. Instead of sending data out, users bring data into the current database from some external file or document. This process is often required when bringing together data from different sources, or when the data is needed in a new database.
In most cases, one has to indicate the file which contains the data (such as CSV or Excel), specify how its columns correspond to columns in the database, and then issue the command that incorporates the information into the database. Modern SQL databases can import files in many formats, whether through SQL commands, tools provided with the database, or a dedicated import feature.
Some general techniques for importing the information are:
1. The `LOAD DATA` command: the most widely used way to bulk-load data from files into database tables, especially when the volume of data is large.
2. The `INSERT INTO` command: individual rows of data are inserted using `INSERT` statements.
3. Import tools supplied with the database: many relational databases, e.g. MySQL, PostgreSQL, and Microsoft SQL Server, provide UIs or command-line tools with which the user can easily import data into the database.
Importing data into a database system generally involves:
1. Identifying the source of the data: the file may come in CSV format, as an Excel file, or even as JSON; all these formats are acceptable.
2. Mapping columns: ensuring that each column in the source file maps correctly to the column in the destination table that is intended to contain that data.
3. Running the SQL command or tool that places the intended data into the required database.
For example, to import a CSV file, each row or line is read and its values are inserted into the corresponding columns of the target database table.
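A hedged sketch of that CSV import using MySQL's `LOAD DATA` (the file path, table, and column list are assumptions mirroring the export sketch above):

```sql
-- Bulk-load the CSV produced earlier into the Users table
LOAD DATA INFILE '/tmp/users.csv'
INTO TABLE Users
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES          -- skip a header row, if present
(id, name, email);      -- map file columns to table columns
```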
Things to Look at When Exporting and Importing Data
Exporting and importing data is not as challenging as it may seem, but there are different aspects that should be looked at in order for the operation to go smoothly.
1. Data Integrity:
It is critical not to lose or corrupt any relevant data during the exporting and importing process. Always verify that the source and destination tables are structurally equivalent and that corresponding data is put into them.
2. Data Source File Format Needs to Match Target:
Ensure the correct match of the source and target formats, which may require consideration of field delimiters, character encoding, and date formats. For example, CSV and MS Excel files can both serve as source files but may need different handling when targeting a database management system.
3. How to Deal With Large Dataset While Exporting/Importing Data:
Such processes can be time- and resource-consuming, so if breaking the dataset into smaller chunks is an option, take it. It also reduces performance issues and errors.
4. Data Validation After Importing Data:
Every database import requires an additional validity check: did the new data actually get inserted? This can be achieved by taking a sample of the added data and matching it with the original file, or by executing consistency checks.
5. Permissions and Security:
Confirm that the user who performs any export or import operation has the required permissions. One should also keep security issues in mind, especially for sensitive types of information. It is wise to deploy the requisite encryption and access controls.
6. Error Handling:
Both during exporting and during importing, errors or problems can occur, such as mismatched data formats or communication-link failures. It is important that error detection and correction mechanisms are established in the system to enable quick diagnosis and rectification of the encountered problems.
The Structured Query Language, or more simply SQL, is famous for a rather simple reason: it is possibly the most used way to deal with large datasets spread across several tables in a relational database. Among its many features, window functions stand out for data analytics. Window functions let you analyse and aggregate data in more sophisticated ways, as they provide the ability to perform calculations over a set of rows that are in some way related to the current row under consideration. Knowing window functions significantly changes the game for any professional who works heavily with data.
Which types of tasks are best suited for window functions?
A window function computes directly over a set of rows related to the current row. In contrast to normal aggregate functions, which collapse a group of rows into a single answer, a window function performs its calculation while leaving the actual rows intact, placing on each row a result based on a calculation extending over a “window,” or cluster, of rows. This way more elaborate analysis can be done while still keeping the level of detail intact.
The idea of a window function is rooted in the definition of “windows,” or groups of rows, within a result set. For instance, when calculating a moving average of stock prices for the last 7 days, the “window” encompasses the data for the last week, and the moving average is calculated for every row with respect to the other rows within its window.
The Basic Elements of Window Functions
Three components are the basic building blocks of window functions:
1. The Function:
The particular computation that is done. Window functions may be aggregates like `COUNT()`, `SUM()`, or `AVG()`, but they may also be ranking functions like `NTILE()` or `ROW_NUMBER()`.
2. The OVER Clause:
This clause defines the range of rows over which the function applies. The window can be ordered, partitioned, or unbounded, depending on the analysis being performed.
3. The Partition By and Order By Clauses:
These are optional clauses that narrow down the window further. The `PARTITION BY` clause breaks the dataset into smaller pieces (partitions) before the window function is applied, while the `ORDER BY` clause defines the sequence of the rows within each partition.
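A compact sketch combining all three building blocks (the `employees` table is a hypothetical example):

```sql
SELECT
    department,
    name,
    salary,
    -- aggregate as a window function: per-department average on every row
    AVG(salary) OVER (PARTITION BY department) AS dept_avg,
    -- ranking function: ordered within each department partition
    RANK() OVER (PARTITION BY department ORDER BY salary DESC) AS dept_rank
FROM employees;
```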
Types of Window Functions
Various window functions exist to aid data analysis, each designed to perform a particular job and serve a specific goal.
1. Ranking Functions
Ranking functions rank all the rows within a partition. A good example is the `ROW_NUMBER()` function, which assigns every row a unique number, starting from 1 for the first row of the partition. For situations where ties exist, `RANK()` and `DENSE_RANK()` are the ranking functions used.
2. Aggregate Functions
These functions operate on the rows within a given partition and return a single result per row, without condensing the entire result into one row as ordinary aggregate queries would. `SUM()`, `AVG()`, `MIN()`, and `MAX()` can be used as window functions to capture total values, averages, and extreme values for each partition.
3. Analytic Functions
Analytic functions serve the case where you want values that do not involve aggregation but still refer to a ‘window’ of rows. For this, functions such as `LEAD()` and `LAG()`, which give access to the following or preceding rows of the result set, or `FIRST_VALUE()` and `LAST_VALUE()`, which give the first and the last value in a window respectively, are used.
4. NTILE()
The `NTILE()` function splits the result set into a given number of buckets, or tiles, with row counts as close to equal as possible. This function is helpful for generating quartiles or percentiles, which are typically used in data analytics for looking at the distribution of data.
Why Are Window Functions Important for Data Analytics?
Window functions are valuable in data analysis precisely because they do not force analysts to aggregate the data set away. The ability to compute, for example, moving averages or row ranks under certain conditions without collapsing the data is very powerful in terms of the insights it gives.
1. More Flexible Analysis
Instead of losing context about a particular dimension, analysts are able to perform multiple calculations against the same dimension with window functions. For example, you can compute a column that contains total sales on every row without losing the detail of each transaction.
2. Efficient Calculations
Window functions remove the need to compute aggregate summaries separately and join them back; the evaluation happens in a single pass over the data set. This is often more efficient than the alternative techniques, especially when large sets of data are involved.
3. Improved Reporting
Window functions are extremely useful when reporting is the focus. Be it calculating running sales totals, making rank-based evaluations of employees, or building other analyses, reports become more efficient and easier to write with the help of window functions.
4. Data Analytics Enhancement
Percentiles can be calculated, moving averages can be derived, and identifying patterns in time-series data becomes relatively easy when employing window functions. These functions let data analysts extract more insight from the data without defining numerous subqueries or additional tables.
Scenarios for Window Functions
There are numerous scenarios where window functions can be applied in data analytics. For instance, while performing a data analysis regarding sales, it might be desirable to get a moving average of each item's sales over a time frame. It is feasible to achieve this using window functions in SQL, performing one calculation per row while retaining all the other pieces of information.
Another popular scenario is ranking. If you want to rank employees in different departments based on certain metrics, you can apply ranking window functions partitioned by department to assign each employee a rank while still enabling evaluation across departments.
Window functions come in handy in the analysis of time-series data as well. For example, while keeping monthly data intact, you can compute a running yearly total of sales, or you can calculate day-on-day sales changes with the help of the `LAG()` or `LEAD()` functions.
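A closing sketch of that time-series pattern (the `daily_sales` table is hypothetical):

```sql
SELECT
    sale_date,
    amount,
    -- difference from the previous day's sales
    amount - LAG(amount) OVER (ORDER BY sale_date) AS day_on_day_change,
    -- running total while keeping each day's row intact
    SUM(amount) OVER (ORDER BY sale_date) AS running_total
FROM daily_sales
ORDER BY sale_date;
```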