parent 577e724aa7
commit 204a9577cd
148 changed files with 450 additions and 144 deletions
Binary file not shown. (new image, 614 KiB)
@ -1 +1,3 @@
# Client side
# Client Side

In game development, the term "Client Side" refers to all the operations and activities that occur on the player's machine, which could be a console, computer, or even a phone. The client side is responsible for rendering graphics, handling input from the user, and sometimes processing game logic. This is in contrast to server-side operations, which involve handling multiplayer connections and synchronizing game state among multiple clients. On the client side, developers need to ensure performance optimization, smooth UI/UX, quick load times, and security to provide an engaging, lag-free gaming experience. Security is also crucial to prevent cheating in multiplayer games, which can be tackled through measures like data obfuscation and encryption.
@ -0,0 +1,3 @@
# Note

These roadmaps cover everything there is to learn for the paths listed below. Don't feel overwhelmed; you don't need to learn it all in the beginning if you are just getting started.
@ -1 +0,0 @@
# React roadmap note
@ -1 +1,3 @@
# Linear algebra
# Linear Algebra

Linear Algebra is a vital field of mathematics that is extensively used in game development. It revolves around vector spaces and the mathematical structures used therein, including matrices, determinants, vectors, eigenvalues, and eigenvectors, among others. In game development, linear algebra is used mainly for computer graphics, physics, and AI. It allows developers to work with spatial transformations, helping them manipulate and interact with the game's 3D space. In a broader context, it is important in computer programming for algorithms, parallax shifting, polygonal modeling, collision detection, and more. From object movement and positional calculations to game physics and creating dynamism in games, linear algebra is key.
@ -1 +1,3 @@
# Vector
# Vector

`Vector` in game development is a mathematical concept and an integral part of game physics. It represents a quantity that has both magnitude and direction. A vector can be used to represent different elements in a game like positions, velocities, accelerations, or directions. In 3D games, it's commonly used to define 3D coordinates (x, y, z). For example, if you have a character in a game and you want to move it up, you'd apply a vector that points upward. Hence, understanding how to manipulate vectors is a fundamental skill in game development.
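As a rough illustration (not tied to any particular engine; the `Vec3` type and helper names below are made up for this sketch), a minimal C++ vector type with the operations games use most might look like this:

```cpp
#include <cmath>
#include <cstdio>

// Minimal 3D vector sketch: positions, velocities, directions, etc.
struct Vec3 {
    float x, y, z;
};

Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
float length(Vec3 v)        { return std::sqrt(dot(v, v)); }

int main() {
    Vec3 position{0.0f, 0.0f, 0.0f};
    Vec3 up{0.0f, 1.0f, 0.0f};
    // Move the character upward by 2 units: apply a vector pointing up.
    position = add(position, scale(up, 2.0f));
    std::printf("(%.1f, %.1f, %.1f), |up| = %.1f\n",
                position.x, position.y, position.z, length(up));
    return 0;
}
```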
@ -1 +1,3 @@
# Matrix
# Matrix

In game development, a **matrix** is a fundamental part of game mathematics. It's a grid of numbers arranged into rows and columns that's particularly important in 3D game development. These matrices are typically 4x4, meaning they contain 16 floating-point numbers, and they're used extensively for transformations. They allow for the scaling, rotation, and translation (moving) of 3D vertices in space. With matrices, these transformations can be combined, and the transformed vertices can be projected from 3D space into 2D screen space for rendering.
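As a hedged sketch of the idea (the row-major layout and the `Mat4`/`transformPoint` names are assumptions made here for illustration, not an engine API), applying a 4x4 translation matrix to a point could look like:

```cpp
#include <cstdio>

// Hypothetical row-major 4x4 matrix applied to a point (x, y, z, 1).
struct Vec3 { float x, y, z; };
struct Mat4 { float m[4][4]; };

Vec3 transformPoint(const Mat4& M, Vec3 p) {
    return {
        M.m[0][0] * p.x + M.m[0][1] * p.y + M.m[0][2] * p.z + M.m[0][3],
        M.m[1][0] * p.x + M.m[1][1] * p.y + M.m[1][2] * p.z + M.m[1][3],
        M.m[2][0] * p.x + M.m[2][1] * p.y + M.m[2][2] * p.z + M.m[2][3],
    };
}

int main() {
    // Translation by (1, 2, 3) expressed as a 4x4 matrix.
    Mat4 T = {{{1, 0, 0, 1},
               {0, 1, 0, 2},
               {0, 0, 1, 3},
               {0, 0, 0, 1}}};
    Vec3 p = transformPoint(T, {5, 0, 0});
    std::printf("(%.0f, %.0f, %.0f)\n", p.x, p.y, p.z);  // prints (6, 2, 3)
    return 0;
}
```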
@ -1 +1,3 @@
# Geometry
# Geometry

Geometry in game development refers to the mathematical study used to define the spatial elements within a game. This is vital in determining how objects interact within a game's environment. In particular, geometry is employed in various aspects like object rendering, collision detection, character movement, and the calculation of angles and distances. It allows developers to create the spatial parameters for a game, including object dimensions and orientations. Understanding the basics, such as 2D vs 3D, polygons, vertices, and meshes, as well as more advanced topics such as vectors, matrices, and quaternions, is crucial to this field.
@ -1 +1,3 @@
# Linear transformation
# Linear Transformation

`Linear transformations` or `linear maps` are an important concept in mathematics, particularly in the fields of linear algebra and functional analysis. A linear transformation can be thought of as a transformation that preserves the operations of addition and scalar multiplication. In other words, a transformation T is linear if for every pair of vectors `x` and `y`, the equation T(x + y) = T(x) + T(y) holds true. Similarly, for any scalar `c` and any vector `x`, the equation T(cx) = cT(x) should also hold true. This property makes them very useful when dealing with systems of linear equations, matrices, and in many areas of computer graphics, including game development.
@ -1 +1,3 @@
# Affine space
# Affine Space

In the context of game mathematics, an **Affine Space** is a fundamental concept you should understand. It is a geometric structure with properties related to both geometry and algebra. The significant aspect of an affine space is that it allows you to work more comfortably with points and vectors. While a vector space on its own focuses on vectors, which have both magnitude and direction, it does not involve points. An affine space makes it easy to add vectors to points or subtract points from each other to get vectors. This concept proves extremely useful in the field of game development, particularly when dealing with graphical models, animations, and motion control.
@ -1 +1,3 @@
# Affine transformation
# Affine Transformation

An **affine transformation**, in the context of game mathematics, is a function between affine spaces which preserves points, straight lines, and planes. Also, sets of parallel lines remain parallel after an affine transformation. In video games, it's typically used for manipulating an object's position in 3D space. This operation allows game developers to perform multiple transformations such as translation (moving an object from one place to another), scaling (changing the size of an object), and rotation (spinning the object around a point). An important feature of affine transformations is that they preserve the uniqueness of points; if two points are distinct to start with, they remain distinct after the transformation. It's important to note that these transformations are often applied relative to an object's own coordinate system rather than the world coordinate system.
@ -1 +1,3 @@
# Quaternion
# Quaternion

A **quaternion** is a number system that extends the complex numbers and is widely used to represent rotations in three dimensions. It involves four components: one real and three imaginary parts. Quaternions are used in game development for efficient and accurate calculations of rotations and orientation. They are particularly useful over other methods, such as Euler angles, due to their resistance to problems like gimbal lock. Despite their complex nature, understanding and implementing quaternions can greatly enhance a game's 3D rotational mechanics and accuracy.
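A minimal sketch of using a unit quaternion to rotate a vector (the `Quat`/`rotate` names and the optimized formula below are illustrative, not a specific engine's API):

```cpp
#include <cmath>
#include <cstdio>

// Sketch only: rotate a vector by a *unit* quaternion (w + xi + yj + zk).
struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };

Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }

// v' = v + 2 * qv x (qv x v + w * v), with qv = (x, y, z)
Vec3 rotate(Quat q, Vec3 v) {
    Vec3 qv{q.x, q.y, q.z};
    Vec3 t = cross(qv, add(cross(qv, v), scale(v, q.w)));
    return add(v, scale(t, 2.0f));
}

int main() {
    // 90-degree rotation around the Y axis: w = cos(45 deg), y = sin(45 deg).
    float h = 0.5f * 3.14159265f * 0.5f;    // half of 90 degrees, in radians
    Quat q{std::cos(h), 0.0f, std::sin(h), 0.0f};
    Vec3 v = rotate(q, {1.0f, 0.0f, 0.0f}); // expect roughly (0, 0, -1)
    std::printf("(%.2f, %.2f, %.2f)\n", v.x, v.y, v.z);
    return 0;
}
```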
@ -1 +1,3 @@
# Spline
# Spline

`Spline` is a mathematical function widely used in computer graphics for generating curves and surfaces. It connects two or more points through a smooth curve, often used in games for defining pathways, movement paths, object shapes, and flow control. Splines are not confined to two dimensions and can be extended to 3D or higher dimensions. Types of splines include `Linear`, `Cubic`, and `Bezier` splines. While linear splines generate straight lines between points, cubic and Bezier splines provide more control and complexity with the addition of control points and handles. Developing a good understanding of splines and their usage can vastly improve the fluidity and visual aesthetics of a game.
@ -1 +1,3 @@
# Euler angle
# Euler Angle

The **Euler angle** is a concept in mathematics and physics used to describe the orientation of a rigid body or a coordinate system in 3D space. It uses three angles, typically named alpha (α), beta (β), and gamma (γ), which represent three sequential rotations around the axes of the original coordinate system. Euler angles can represent any rotation as a sequence of three elementary rotations. Keep in mind, however, that Euler angles are not unique, and different sequences of rotations can produce an identical overall result. It's also noteworthy that Euler angles are prone to a problem known as gimbal lock, where the first and third axes align, causing the loss of a degree of freedom and unpredictable behavior in particular orientations.
@ -1 +1,3 @@
# Hermite
# Hermite

Hermite refers to Hermite interpolation, a fundamental technique in game development for executing smooth transitions. Essentially, Hermite interpolation is an application of polynomial mathematics, with two points serving as the start and end (usually 3D position vectors) and the tangents at these points controlling the curve's shape. The technique is named after Charles Hermite, a French mathematician. Hermite interpolation can be useful in different aspects of game development, such as creating smooth animations, camera paths, or motion patterns. Note, however, that while Hermite interpolation offers control over the start and end points and their tangents, the curve's behavior between them can be harder to predict.
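A minimal sketch of 1D cubic Hermite interpolation using the standard Hermite basis (the function name and sample values are made up for illustration; 3D paths apply the same formula per component):

```cpp
#include <cstdio>

// p0, p1: endpoint values; m0, m1: tangents at the endpoints; t in [0, 1].
float hermite(float p0, float m0, float p1, float m1, float t) {
    float t2 = t * t;
    float t3 = t2 * t;
    return (2 * t3 - 3 * t2 + 1) * p0
         + (t3 - 2 * t2 + t)     * m0
         + (-2 * t3 + 3 * t2)    * p1
         + (t3 - t2)             * m1;
}

int main() {
    // Ease from 0 to 10 with flat tangents at both ends (smooth start/stop).
    for (float t = 0.0f; t <= 1.001f; t += 0.25f)
        std::printf("t=%.2f -> %.3f\n", t, hermite(0.0f, 0.0f, 10.0f, 0.0f, t));
    return 0;
}
```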
@ -1 +1,3 @@
# Bezier
# Bezier

`Bezier curves` are named after Pierre Bézier, a French engineer working at Renault, who used them in the 1960s for designing car bodies. A Bezier curve is defined by a set of control points, with a minimum of two but no upper limit. The curve is calculated between the first and the last control point; it passes through these two endpoints but generally not through the intermediate control points, which only influence the shape of the curve. There are linear, quadratic, and cubic Bezier curves, but curves with more control points are also possible. They are widely used in computer graphics and animation, and extensively in vector images and tools to create shapes, text, and objects.
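A hedged sketch of evaluating a cubic Bezier curve from its four control points (illustrative names, not a library API):

```cpp
#include <cstdio>

struct Point { float x, y; };

// Cubic Bezier: p0 and p3 are the endpoints, p1 and p2 are the control points.
Point cubicBezier(Point p0, Point p1, Point p2, Point p3, float t) {
    float u = 1.0f - t;
    float b0 = u * u * u, b1 = 3 * u * u * t, b2 = 3 * u * t * t, b3 = t * t * t;
    return {b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
            b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y};
}

int main() {
    Point a{0, 0}, c1{0, 1}, c2{1, 1}, b{1, 0};
    for (float t = 0.0f; t <= 1.001f; t += 0.25f) {
        Point p = cubicBezier(a, c1, c2, b, t);
        std::printf("t=%.2f -> (%.3f, %.3f)\n", t, p.x, p.y);
    }
    return 0;
}
```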
@ -1 +1,3 @@
# Catmull rom
# Catmull-Rom

The **Catmull-Rom** spline is a form of interpolation used in 2D and 3D graphics. Named after Edwin Catmull and Raphael Rom, it offers a simple way to smoothly move objects along a set of points or, in terms of graphics, to smoothly draw a curve connecting several points. It's a cubic interpolating spline, meaning it uses a cubic polynomial to compute coordinates. This makes Catmull-Rom ideal for creating smooth and natural curves in graphics and animation. It also has C1 continuity, ensuring the curve doesn't have any abrupt changes in direction. However, if not managed properly, it can create loops between points.
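A minimal sketch of the uniform Catmull-Rom formula in 1D (illustrative only; real waypoint paths apply it per component):

```cpp
#include <cstdio>

// Uniform Catmull-Rom: the curve passes through p1 and p2 as t goes 0 -> 1;
// p0 and p3 are the neighbouring points that shape the tangents.
float catmullRom(float p0, float p1, float p2, float p3, float t) {
    float t2 = t * t, t3 = t2 * t;
    return 0.5f * ((2.0f * p1)
                 + (-p0 + p2) * t
                 + (2.0f * p0 - 5.0f * p1 + 4.0f * p2 - p3) * t2
                 + (-p0 + 3.0f * p1 - 3.0f * p2 + p3) * t3);
}

int main() {
    // Smoothly interpolate between the middle two waypoints 1.0 and 3.0.
    for (float t = 0.0f; t <= 1.001f; t += 0.25f)
        std::printf("t=%.2f -> %.3f\n", t, catmullRom(0.0f, 1.0f, 3.0f, 4.0f, t));
    return 0;
}
```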
@ -1 +1,3 @@
# Orientation
# Curve

In the context of game development, **Orientation** refers to the direction in which an object is pointed in 3D space. To determine an object's orientation in 3D space, we typically use three angles: pitch, yaw, and roll, collectively known as Euler angles. **Pitch** is the rotation around the X-axis, **Yaw** around the Y-axis, and **Roll** around the Z-axis. Alternatively, orientation can also be represented using a quaternion. Quaternions have the advantage of avoiding a problem known as gimbal lock (a loss of one degree of freedom in 3D space) that can occur when using Euler angles.
@ -1 +1,3 @@
# Perspective
# Perspective

In game development, **Perspective** plays a significant role in creating a three-dimensional world on a two-dimensional screen. It mimics the way our eyes perceive distance and depth, with objects appearing smaller as they get farther away. Essentially, this is achieved by projecting 3D coordinates onto a virtual screen. Perspective projection comes in types such as one-point perspective, where only one axis shows a change in size with depth, and two-point perspective, where two axes do. It creates more realistic views, enhancing game visualization and immersion. An important aspect is the vanishing point, the point where parallel lines appear to converge in the distance from the player's viewpoint.
@ -1 +1,3 @@
# Orthogonal
# Orthogonal

Orthogonal projection, or orthographic projection, is a type of parallel projection in game development where the lines of projection are perpendicular to the projection plane. This creates a view that is straight-on, essentially removing any sense of perspective. Unlike perspective projection, where objects farther from the viewer appear smaller, objects in orthogonal projection remain the same size regardless of distance. The lack of perspective in orthogonal projection can be useful for specific types of games like platformers or strategy games. It is commonly used in CAD (Computer-Aided Design) and technical drawings as well.
@ -1 +1,3 @@
# Projection
# Projection

`Projection` in game mathematics often refers to the method by which three-dimensional scenes are transferred to a two-dimensional plane, typically a computer screen. There are two main types of projection in game development: `Orthographic Projection` and `Perspective Projection`. In orthographic projection, objects maintain their size regardless of their distance from the camera. This is often used in 2D games, or in 3D games where perspective is not important. Perspective projection, on the other hand, mimics human eye perspective, where distant objects appear smaller. This method provides more realistic rendering for 3D games. It's crucial to understand projection in game development because it governs how virtual 3D spaces and objects are displayed on 2D screens.
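As a minimal sketch of the difference (assuming a camera at the origin looking down the \(-z\) axis and a chosen projection-plane distance \(d\); real engines use full 4x4 projection matrices), a view-space point \((x, y, z)\) maps to the screen roughly as:

\[
\text{perspective: } x' = \frac{d\,x}{-z}, \quad y' = \frac{d\,y}{-z}
\qquad\qquad
\text{orthographic: } x' = x, \quad y' = y
\]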
@ -1 +1,3 @@
# Game mathematics
# Game Mathematics

"Game Mathematics" is a critical aspect of game development that deals with the use of mathematical concepts to create and control game mechanics. This involves areas such as geometry for 3D modelling, logic for game rules, algebra for scoring systems, and trigonometry for movements or trajectories. Understanding game mathematics enables developers to implement features like physics simulation, AI behaviours, and procedural generation. Advanced topics include complex calculations for graphics (e.g., shaders, lighting) and calculus for continuous animation or advanced physics. The mathematical complexity depends on the game's demands, but a solid foundation is crucial for any game developer.
@ -1 +1,3 @@
# Center of mass
# Center of Mass

The **center of mass** is a position defined relative to an object or system of objects. Typically denoted by \(COM\), it refers to the average position of all the parts of the system, weighted according to their masses. For instance, if you have a uniformly dense object, the center of mass is at the geometric center of that object. In gaming, the center of mass of an object can have a significant impact on how the object behaves when forces are applied to it. This includes how the object moves in response to these forces, and it can affect the realism of the physics simulations in a game.
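In formula form, for a system of point masses \(m_i\) located at positions \(\mathbf{r}_i\), the center of mass is the mass-weighted average position:

\[
\mathbf{r}_{COM} = \frac{\sum_i m_i \, \mathbf{r}_i}{\sum_i m_i}
\]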
@ -1 +1,3 @@
# Acceleration
# Acceleration

**Acceleration** refers to the rate of change of velocity per unit time. This physical concept is translated into game dynamics, where it impacts the movement and speed of game characters and objects. For example, when a character starts moving, there is usually a slight delay before they reach their top speed, which they then maintain as long as the move button is held down; that ramp-up is caused by acceleration. Conversely, when the button is released, the character doesn't stop instantly but slows down gradually; this is due to deceleration, which is negative acceleration. By mastering acceleration and deceleration, game developers can create more realistic and interesting movements for their characters.
@ -1 +1,3 @@
# Force
# Force

**Force** is a vital concept in game development, especially when crafting physics in games. In the context of game physics, a force is an influence that causes an object to undergo a certain change, whether in its movement, direction, or shape. It's typically implemented in game engines as part of the physics simulation, which computes forces like gravity, friction, or custom forces defined by the developer. Incorporating forces gives a realistic feel to the game, allowing objects to interact naturally following the laws of physics. This is central in genres like racing games, sports games, and any game featuring physical interactions between objects. Remember `F = ma`: the acceleration of an object is directly proportional to the force applied and inversely proportional to its mass. The balance and manipulation of these forces are integral to dynamic, immersive gameplay.
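As a rough sketch of how a physics step might apply `F = ma` (using semi-implicit Euler integration; the `Body` struct and the numbers are assumptions for illustration, not a particular engine's API):

```cpp
#include <cstdio>

// Sketch of applying a force with semi-implicit Euler integration (1D for brevity).
struct Body {
    float mass;      // kg
    float position;  // m
    float velocity;  // m/s
};

void integrate(Body& b, float force, float dt) {
    float acceleration = force / b.mass;   // a = F / m
    b.velocity += acceleration * dt;       // update velocity first...
    b.position += b.velocity * dt;         // ...then position (semi-implicit Euler)
}

int main() {
    Body crate{10.0f, 0.0f, 0.0f};
    const float gravity = -9.81f * crate.mass;  // weight acting downward
    for (int step = 0; step < 60; ++step)       // one second at 60 updates/s
        integrate(crate, gravity, 1.0f / 60.0f);
    std::printf("after 1s: pos=%.2f m, vel=%.2f m/s\n", crate.position, crate.velocity);
    return 0;
}
```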
@ -1 +1,3 @@
# Angular velocity
# Angular Velocity

Angular velocity, denoted by the symbol 'ω', is a measure of the rate of change of an angle per unit of time. In simpler terms, it corresponds to how quickly an object moves around a circle or rotates around a central point. Angular velocity is typically measured in radians per second (rad/s). If you think of an object moving in a circular path, the angular velocity is the speed at which the angle changes as the object travels along the circumference of the circle. Angular velocity is a vector quantity, implying it has both magnitude and direction. The direction of the angular velocity vector is perpendicular to the plane of rotation, following the right-hand rule. It plays a crucial role in game development, especially in physics simulation and character control.
@ -1 +1,3 @@
# Linear velocity
# Linear Velocity

**Linear Velocity** is a fundamental concept in physics that is extensively used in game development. It refers to the rate of change of an object's position with respect to a frame of reference. It's calculated by dividing the change in position by the change in time, often represented with the vector 'v'. In game development, an object's linear velocity can be manipulated to control its speed and direction. This is especially important in the development of physics simulations or movement-dependent gameplay elements. For instance, it can be used to make a character run or drive, or to throw an object at different speeds and directions.
@ -1 +1,3 @@
# Moment of inertia
# Moment of Inertia

The **moment of inertia**, also known as rotational inertia, is a measure of an object's resistance to changes in its rotation. In simpler terms, it's essentially how difficult it is to start or stop an object from spinning. It is determined by both the mass of an object and the distribution of that mass around the axis of rotation. In the context of game development, the moment of inertia is crucial for creating realistic movements of characters, objects, or vehicles within the game. This is particularly relevant in scenarios where the motion involves spinning or revolving entities. Calculating and applying this physics ensures a more immersive and believable gaming experience.
@ -1 +1,3 @@
# Joints
# Joints

Joints in game development primarily refer to the connections between two objects, often used in the context of physics simulations and character animations. These might simulate the physics of real-world joints like hinges or springs. Developers can control various characteristics of joints such as their constraints, forces, and reactions. The different types come with various properties suitable for specific needs. For example, Fixed joints keep objects together, Hinge joints allow rotation around an axis, and Spring joints apply a force to keep objects apart.
@ -1 +1,3 @@
# Restitution
# Restitution

In game development, **Restitution** is a property closely related to the physics of objects. Essentially, restitution represents the "bounciness" of an object or, in more scientific terms, the ratio of the final relative velocity to the initial relative velocity of two objects after a collision. In the context of game physics, when objects collide, restitution is used to calculate how much each object should bounce back or recoil. Restitution values typically fall between 0 and 1, where a value of 0 means an object will not bounce at all and a value of 1 refers to a perfectly elastic collision with no energy lost. Therefore, the higher the restitution value, the more the object bounces back after a collision.
@ -1 +1,3 @@
# Buoyancy
# Buoyancy

**Buoyancy** refers to a specific interaction in physics where an object submerged in fluid (such as a game character in water) experiences an upward force that counteracts the force of gravity. This makes the object either float or appear lighter. In game development, implementing buoyancy can enhance realism, particularly in games that involve water-based activities or environments. Buoyancy can be manipulated through adjustments in density and volume to create various effects, from making heavy objects float to sinking light ones. Calculating it typically requires approximating the object as a sphere or another simple geometric shape and applying Archimedes' principle, which states that the buoyant force equals the weight of the fluid that the object displaces. In the realm of video games, programming buoyancy can involve complex physics equations and careful testing to achieve a balance between realism and playability.
@ -1 +1,3 @@
# Friction
# Friction

`Friction` is a crucial concept in game dynamics. In the context of games, it's typically used to slow down or impede movement, providing a realistic feel to character or object movement. For example, when a player's character runs on a smooth surface as compared to a rough one, friction influences the speed and control of that character. It can be seen in how cars skid on icy surfaces, how walking speed changes depending on the terrain, or how a ball rolls and eventually slows. The equation to compute friction is usually `f = μN`, where `f` is the force of friction, `μ` is the coefficient of friction (which depends on the two surfaces interacting), and `N` is the normal force (generally the weight of the object on level ground). You can adjust the coefficient of friction in a game to achieve different effects depending on the desired outcome.
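A small illustrative sketch of `f = μN` slowing a sliding object on level ground (all values here are made-up assumptions):

```cpp
#include <algorithm>
#include <cstdio>

// Sketch: slow a sliding object using kinetic friction f = mu * N.
// Assumes a flat surface, so the normal force N is simply m * g.
int main() {
    float mass = 2.0f;       // kg
    float mu = 0.4f;         // coefficient of friction (depends on the surfaces)
    float g = 9.81f;         // m/s^2
    float velocity = 5.0f;   // m/s, sliding to the right
    const float dt = 1.0f / 60.0f;

    for (int step = 0; step < 120 && velocity > 0.0f; ++step) {
        float normalForce = mass * g;          // N = m * g on level ground
        float friction = mu * normalForce;     // f = mu * N, opposing the motion
        float deceleration = friction / mass;  // a = f / m
        velocity = std::max(0.0f, velocity - deceleration * dt);
    }
    std::printf("velocity after up to 2s of sliding: %.2f m/s\n", velocity);
    return 0;
}
```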
@ -1 +1,3 @@
# Dynamics
# Dynamics

**Dynamics** in game physics refers to the calculation and simulation of the movement and interaction of objects over time, taking into account properties such as mass, force, and velocity. Its purpose is to ensure the motion of game elements matches expectations from the real world, or the specific conditions defined by the game designers. This typically includes topics like kinematics (velocity and acceleration), Newton's laws of motion, forces (like gravity or friction), and conservation laws (such as momentum or energy). It also involves solving equations of motion for game objects and detecting and resolving collisions. Dynamics, together with statics (dealing with how forces balance on rigid bodies at rest), makes up the core of game physics simulation.
@ -1 +1,3 @@
# Ccd
# CCD

**CCD (Continuous Collision Detection)** is a sophisticated technique used in detecting collisions within games, more advanced than traditional discrete collision detection. Rather than checking for collisions at discrete time steps, CCD checks for any possible collisions that may happen along the entire motion path of the moving object during a time step. This can prevent instances of "tunneling", where an object moves so fast that it passes through walls or obstacles undetected by discrete collision detection because it is at different points from one frame to the next. Although more computationally heavy than discrete detection, CCD offers increased accuracy in collision detection, making it vital in games where precise movements are needed.
@ -1 +1,3 @@
# Convex decomposition
# Convex Decomposition

`Convex Decomposition` is a process in game development that involves breaking down complex, concave shapes into simpler, convex shapes. This technique considerably simplifies the computation involved in collision detection, a critical aspect of any game development project that involves physical simulation. In concrete terms, a concave shape has one or more parts that 'cave in' or have recesses, while a convex shape has no such depressions; in simple terms, it 'bulges out' with no interior angles exceeding 180 degrees. Convex decomposition is essentially the process of breaking down a shape with 'caves' or 'recesses' into simpler shapes that only 'bulge out'.
@ -1 +1,5 @@
# Concave
# Concave

In game development, a shape is said to be "concave" if it has an interior angle greater than 180 degrees. In simpler terms, if the shape has a portion that curves inwards or a "cave-like" indentation, it's concave. Unlike with convex shapes, a straight line drawn between two points within a concave shape may not lie entirely within the boundaries of the shape. Concave shapes add complexity in game physics, especially in collision detection, as there are more points and angles to consider compared to convex shapes. These shapes are commonly seen in game elements like terrain, mazes, game level boundaries, and game characters. The practical application of concave shapes largely depends on the gameplay requirements and the level of realism needed in the game.
@ -1 +1,3 @@
# Convex hull
# Convex Hull

The **Convex Hull** is a foundational concept used in various areas of game development, particularly in the creation of physics engines and collision detection. Essentially, it is the smallest convex polygon that can enclose a set of points in a two-dimensional space, or the smallest convex polyhedron for a set of points in a three-dimensional space. It can be thought of as the shape that a rubber band would take if it was stretched around the points and then released. In computational geometry, various algorithms like Graham's Scan and QuickHull have been developed to compute Convex Hulls rapidly. Using Convex Hulls in game engines can drastically improve the performance of collision detection routines as fewer points need to be checked for overlap, which in turn helps in creating smoother gameplay.
@ -1 +1,3 @@
# Convex
# Convex

The term "convex" in game development relates primarily to shapes and collision detection within the gaming environment. A shape is convex if all line segments between any two points in the shape lie entirely within the shape. This is an essential concept when programming collision detection and physics engines in games since the mathematical calculations can be more straightforward and efficient when the objects are convex. In addition to this, many rendering algorithms also operate optimally on convex objects, thereby helping improve the game’s graphical performance.
@ -1 +1,3 @@
# Convexity
# Convexity

Convexity is a significant concept used in game development, particularly in the narrow phase of collision detection. A shape is considered convex if, for every pair of points inside the shape, the complete line segment between them is also inside the shape. Essentially, a convex shape has no angles pointing inwards. Convex shapes can be of great benefit in game development because they're simpler to handle computationally. For instance, in collision detection algorithms such as the separating axis theorem (SAT) and Gilbert–Johnson–Keerthi (GJK), the input shapes are often convex. Non-convex or concave shapes usually require more complex methods for collision detection, often involving partitioning the shape into smaller convex parts.
@ -1 +1,3 @@
# Narrow phase
# Narrow Phase

The **Narrow Phase** of collision detection is a process that dives deeply into detailed collision checks for pairs of objects that were already found to be potentially colliding during the broad phase. The narrow phase is essentially a fine-tuning process. Upon positive detection from the broad phase, it identifies the precise points of collision between the two objects, and it may involve more detailed shape representations and more expensive algorithms. It might also calculate additional information necessary for the physics simulation (like the exact time of impact and contact normals). The usual methods used for this phase involve bounding boxes, bounding spheres, or the separating axis theorem. However, the method can vary depending on the complexity of the objects' shapes and the specific needs of the game.
@ -1 +1,3 @@
# Epa
# EPA

In collision detection, **EPA** stands for the *Expanding Polytope Algorithm*. It is typically used together with the GJK algorithm during the narrow phase: once GJK determines that two convex shapes intersect (i.e., the origin lies inside their Minkowski difference), EPA computes how deep the overlap is. Starting from the simplex that GJK terminated with, the algorithm repeatedly expands a polytope inside the Minkowski difference by adding support points in the direction of the face closest to the origin. When the polytope can no longer be expanded meaningfully, that closest face yields the penetration depth and contact normal, which the physics engine then uses to resolve the collision.
@ -1 +1,5 @@
# Gjk
# GJK

The **GJK algorithm** (Gilbert–Johnson–Keerthi) is a computational geometry algorithm that is widely used to detect collisions between convex objects in video games and simulations. The primary role of this algorithm is to assess the intersection between two convex shapes. What makes it unique and widely used is its efficiency and accuracy even when dealing with complex three-dimensional shapes. It uses the concept of the "Minkowski difference" to simplify its calculations and determine if two shapes are intersecting.

The algorithm works iteratively, beginning with a single point (the origin) and progressing by adding vertices from the Minkowski difference, each time refining a simple 'guess' about the direction of the nearest point to the origin until it either concludes that the shapes intersect (the origin is inside the Minkowski difference), or until it can't progress further, in which case the shapes are confirmed not to intersect. This makes it an incredibly powerful and useful tool for game developers.
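As a hedged sketch of the building block GJK relies on, here is a 2D support function over the Minkowski difference (illustrative only; the simplex construction and origin test that complete the algorithm are omitted):

```cpp
#include <cstdio>
#include <vector>

// For a direction d, the support function returns the point of the Minkowski
// difference A - B that lies farthest along d.
struct Vec2 { float x, y; };

float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

Vec2 farthestPoint(const std::vector<Vec2>& shape, Vec2 d) {
    Vec2 best = shape[0];
    for (const Vec2& p : shape)
        if (dot(p, d) > dot(best, d)) best = p;
    return best;
}

// support(A, B, d) = farthest point of A along d minus farthest point of B along -d.
Vec2 support(const std::vector<Vec2>& a, const std::vector<Vec2>& b, Vec2 d) {
    Vec2 pa = farthestPoint(a, d);
    Vec2 pb = farthestPoint(b, {-d.x, -d.y});
    return {pa.x - pb.x, pa.y - pb.y};
}

int main() {
    std::vector<Vec2> boxA = {{0, 0}, {2, 0}, {2, 2}, {0, 2}};
    std::vector<Vec2> boxB = {{1, 1}, {3, 1}, {3, 3}, {1, 3}};
    Vec2 s = support(boxA, boxB, {1.0f, 0.0f});
    std::printf("support along +x: (%.1f, %.1f)\n", s.x, s.y);
    return 0;
}
```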
@ -1 +1,3 @@
# Intersection
# Intersection

`Intersection` testing is the part of the narrow phase of collision detection where the exact point or points of contact are determined between two potentially colliding objects. This process takes place once a potential collision has been identified in the broad phase. Algorithms and techniques such as Axis-Aligned Bounding Boxes (AABB), the Separating Axis Theorem (SAT), sphere or capsule bounding volumes, and many others are used for different intersection tests based on the shape of the objects. The intersection test provides valuable data such as the point of contact and the direction and depth of penetration, which are used to calculate an accurate physical response to the collision.
@ -1 +1,3 @@
# Sat
# SAT

`SAT`, or the separating axis theorem, is frequently used for collision detection in game development. Its primary benefit is simple and fast detection of whether two convex polygons intersect. The theorem works by projecting all points of both polygons onto a set of candidate axes and checking each projection for overlap; if any axis shows no overlap, the shapes do not intersect. However, it can be relatively time-consuming when dealing with more complex models or numerous objects, as it has to calculate many projections, so it is usually paired with a broad-phase detection system that first reduces the number of candidate pairs. A deep explanation of how `SAT` works might involve more mathematical detail or visual aids, but this is the foundation of its use in game development.
@ -1 +1,3 @@
# Aabb
# AABB

`AABB`, short for Axis-Aligned Bounding Box, is a commonly used form of bounding volume in game development. It is a box that directly aligns with the axes of the coordinate system and encapsulates a game object. The sides of an AABB are aligned with the axes, which is helpful when carrying out certain calculations, as non-axis-aligned boxes would require more complex math. AABBs are primarily used for broad-phase collision detection, which means checking whether two objects might be in the process of colliding. Although AABBs are relatively conservative and can have more bounding volume than oriented bounding boxes (OBBs), they are simpler and faster to use in collision detection.
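A minimal sketch of the standard axis-by-axis AABB overlap test (the struct layout is an assumption for illustration):

```cpp
#include <cstdio>

// Two AABBs overlap only if their extents overlap on every axis.
struct AABB {
    float minX, minY, minZ;
    float maxX, maxY, maxZ;
};

bool overlaps(const AABB& a, const AABB& b) {
    return a.minX <= b.maxX && a.maxX >= b.minX &&
           a.minY <= b.maxY && a.maxY >= b.minY &&
           a.minZ <= b.maxZ && a.maxZ >= b.minZ;
}

int main() {
    AABB crate{0, 0, 0, 1, 1, 1};
    AABB player{0.5f, 0.5f, 0.5f, 1.5f, 1.5f, 1.5f};
    std::printf("overlap: %s\n", overlaps(crate, player) ? "yes" : "no");
    return 0;
}
```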
@ -1 +1,3 @@
# Bounding volume
# Bounding Volume

A `Bounding Volume` is a simple shape that fully encompasses a more complex game model. It is less expensive to check for the intersection of bounding volumes than to check for the intersection of the actual models. Some commonly used types of bounding volume in game development include Axis-Aligned Bounding Boxes (AABBs), Bounding Spheres, and Oriented Bounding Boxes (OBBs). AABBs and Bounding Spheres are simple to implement and work well with static objects, while OBBs are slightly more complex and are often used with dynamic objects that need to rotate.
@ -1 +1,3 @@
# Obb
# OBB

An `Oriented Bounding Box (OBB)` is a type of bounding volume used in computer graphics and computational geometry. It is often used to simplify complex geometric objects by enclosing them in a box that matches the actual object's size and orientation much more closely. Unlike the `Axis-Aligned Bounding Box (AABB)`, the `OBB` is not constrained to align with the coordinate axes, so the box can be rotated. Its orientation is usually chosen based on the object's local coordinate system, so the `OBB` rotates along with the object. Properties of an `OBB` include its center, dimensions, and orientation. However, it is worth noting that `OBBs` can be more computationally intensive than `AABBs` due to their greater mathematical complexity.
@ -1 +1,3 @@
# Broad phase
# Broad Phase

**Broad Phase Collision Detection** is the first step in the collision detection process. Its primary function is to identify which pairs of objects might potentially collide. Rather than examining the entire body of every object for possible collision, it wraps up each one in a simpler shape like a bounding box or sphere, aiming to reduce the number of calculations. The output of this phase is a list of 'candidate pairs' which are passed onto the next phase, often referred to as the narrow phase, for in-depth overlap checks.
@ -1 +1,3 @@
# Dbvt
# DBVT

`DBVT` or `Dynamic Bounding Volume Tree` is an acceleration data structure that's primarily used in physics simulations like collision detection. It's a type of BVH (`Bounding Volume Hierarchy`), but the unique aspect of a DBVT is its handling of dynamic objects. As the name suggests, it's specifically designed to efficiently handle changing scenarios, such as objects moving or environments evolving, better than a typical BVH. Unlike a static BVH, a DBVT dynamically updates the tree as objects move, maintaining efficiency of collision queries. It primarily does this through tree rotations and refitting bounding volumes rather than fully rebuilding the tree. This makes DBVT a highly appealing option for scenarios with considerable dynamics.
@ -1 +1,3 @@
# Bvh
# BVH

BVH, or Bounding Volume Hierarchy, is a tree data structure used in 3D computer graphics to speed up the rendering process. It organizes the geometry in a hierarchical structure where each node in the tree represents a bounding volume (a volume enclosing or containing one or more geometric objects). The root node of the BVH contains all other nodes or geometric objects, its child nodes represent a partition of the space, and the leaf nodes are often individual geometric objects. The main objective of using a BVH is to quickly exclude large portions of the scene from the rendering process, reducing the computational load of evaluating every single object in the scene individually.
@ -1 +1,3 @@
# Spatial partitioning
# Spatial Partitioning

"Spatial partitioning" is a technique used in computational geometry, intended to make calculations involving objects in space more efficient. It involves dividing a large virtual space into a series of smaller spaces, or "partitions". These partitions can be used to quickly eliminate areas that are irrelevant to a particular calculation or query, thus lowering the overall computational cost. This technique is widely used in game development in contexts such as collision detection, rendering, pathfinding, and more. Various methods exist for spatial partitioning, including grid-based, tree-based (like Quadtree and Octree), and space-filling curve (like Z-order or Hilbert curve) approaches.
@ -1 +1,3 @@
# Sort and sweep
# Sort and Sweep

**Sort and Sweep** is an algorithm used in collision detection in game development which optimizes the process of identifying potentially intersecting objects. Here's how it works: first, all objects in the game are sorted along a specific axis (typically the x axis). Then a line (known as the 'sweep line') is moved along this axis. As the line sweeps over the scene, any objects that cross it are added to an 'active' list. When an object no longer intersects the sweep line, it's removed from this list. The only objects checked for intersection are those within this 'active' list, reducing the number of checks required. This makes sort and sweep an efficient spatial partitioning strategy.
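A simplified sketch of the idea along one axis (real implementations maintain an explicit active list and work on full bounding boxes; the data here is made up):

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Each object is reduced to its extent (min/max) along the x axis.
struct Interval { int id; float minX, maxX; };

int main() {
    std::vector<Interval> objects = {
        {0, 0.0f, 2.0f}, {1, 1.5f, 3.0f}, {2, 5.0f, 6.0f}, {3, 5.5f, 7.0f}};

    // Sort along the sweep axis.
    std::sort(objects.begin(), objects.end(),
              [](const Interval& a, const Interval& b) { return a.minX < b.minX; });

    // Sweep: once a later interval starts past objects[i].maxX, nothing after it
    // can overlap objects[i] either, so we stop early.
    for (size_t i = 0; i < objects.size(); ++i) {
        for (size_t j = i + 1; j < objects.size(); ++j) {
            if (objects[j].minX > objects[i].maxX) break;
            std::printf("candidate pair: %d and %d\n", objects[i].id, objects[j].id);
        }
    }
    return 0;
}
```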
@ -1 +1,3 @@
# Collision detection
# Collision Detection

**Collision Detection** is a critical aspect of game physics that handles the computer’s ability to calculate and respond when two or more objects come into contact in a game environment. This is vital to ensure objects interact realistically, don't pass through each other, and impact the game world in intended ways. Techniques for collision detection can vary based on the complexity required by the game. Simple methods may involve bounding boxes or spheres that encapsulate objects; when these boxes or spheres overlap, a collision is assumed. More complex methods consider the object's shape and volume for precise detection. Several libraries and game engines offer built-in support for collision detection, making it easier for developers to implement in their games.
@ -1 +1,3 @@
# Game physics
# Game Physics

_Game physics_ is an integral part of game development that simulates the laws of physics in a virtual environment. This simulation brings realism into the game by defining how objects move, interact, and react to collisions and forces. Game physics ranges from how a character jumps or moves in 2D or 3D space to more complex mechanics such as fluid dynamics or ragdoll physics. Two main types of game physics are 'arcade physics', which is simpler and more abstract, and 'realistic physics', which attempts to fully recreate real-life physics interactions. Implementing game physics requires a combination of mathematical knowledge and programming skills to integrate physics engines like Unity's PhysX or Unreal Engine's built-in physics system.
@ -1 +1,3 @@
# Godot
# Godot

Godot is an open-source, multi-platform game engine that is known for being feature-rich and user-friendly. It is developed by hundreds of contributors from around the world and supports the creation of both 2D and 3D games. Godot uses its own scripting language, GDScript, which is similar to Python, but it also supports C# and visual scripting. It is equipped with a unique scene system and comes with a multitude of tools that can expedite the development process. Godot's design philosophy centers around flexibility, extensibility, and ease of use, providing a handy tool for both beginners and pros in game development.
@ -1 +1,3 @@
# Unreal engine
# Unreal Engine

The **Unreal Engine** is a powerful game development engine created by Epic Games. Used by game developers worldwide, it supports the creation of high-quality games across multiple platforms such as iOS, Android, Windows, Mac, Xbox, and PlayStation. Unreal Engine is renowned for its photo-realistic rendering, dynamic physics and effects, robust multiplayer framework, and its flexible scripting system called Blueprint. The engine is also fully equipped with dedicated tools and functionalities for animation, AI, lighting, cinematography, and post-processing effects. The most recent version, Unreal Engine 5, introduces real-time Global Illumination and makes film-quality real-time graphics achievable.
@ -1 +1,3 @@
# Native
# Native

You don't necessarily have to use tools like Unreal, Unity3D, or Godot to make games. You can also use native languages like C++ or Rust. However, you will have to do a lot of the work yourself and learn many things that game engines already handle for you.
@ -1 +1,3 @@
# Unity 3d
# Unity 3D

**Unity 3D** is a versatile, cross-platform game engine that supports the development of both 2D and 3D games. It allows users to create a wide variety of games for AR, VR, mobile, consoles, and desktop computers. It provides a host of powerful features and tools, such as scripting, asset bundling, scene building, and simulation, to assist developers in creating interactive content. Unity 3D also boasts a large, active community that regularly contributes tutorials, scripts, assets, and more, making it a robust platform for all levels of game developers.
@ -1 +1,3 @@
# Game engine
# Game Engine

A *Game Engine* is a software framework designed to facilitate the creation and development of video games. Developers use them to create games for consoles, mobile devices, and personal computers. The core functionality typically provided by a game engine includes a rendering engine ("renderer") for 2D or 3D graphics, a physics engine or collision detection (and collision response), sound, scripting, animation, artificial intelligence, networking, streaming, memory management, and a scene graph. Game Engines can save a significant amount of development time by providing these reusable components. However, they aren't one-size-fits-all solutions, as developers must still customize much of the code to fit their games' unique needs. Some popular game engines are Unity, Unreal Engine, and Godot.
@ -1 +1,7 @@
# C cpp
# C / C++

**C** and **C++ (commonly known as CPP)** are two of the most foundational high-level programming languages in computer science. **C** was developed in the 1970s and is a procedural language, meaning it follows a step-by-step approach. Its fundamental principles include structured programming and lexical variable scope.

**C++**, on the other hand, follows both the procedural and object-oriented programming paradigms. It was developed as an extension of C to add the concept of "classes", a core feature of object-oriented programming. C++ enhances C by introducing features like function overloading, exception handling, and templates.

Both of these languages heavily influence modern game development, where they often serve as the backbone of major game engines like Unreal. Game developers use these languages for tasks such as rendering graphics, implementing game logic, and optimizing performance.
@ -1 +1,3 @@
# Csharp
# C#

**C# (CSharp)** is a modern, object-oriented programming language developed and maintained by Microsoft. It's primarily used for developing desktop applications and, more prominently, Windows applications within the Microsoft .NET framework. However, the language is versatile and has a wide range of uses in web services, websites, enterprise software, and even mobile app development. C# is known for its simplicity, type safety, and support for component-oriented software development. It has also been adopted by Unity, a widely used game engine, making it one of the preferred languages for game development.
@ -1 +1,3 @@
# Assembly
# Assembly

**Assembly** is a low-level programming language, often used for direct hardware manipulation, real-time systems, and performance-critical code. It provides a strong correspondence between its instructions and the architecture's machine-code instructions, since it directly represents the specific commands of the computer's CPU. However, it's closer to machine language (binary code) than to human language, which makes it difficult to read and understand. The syntax varies greatly depending on the CPU architecture for which it's designed, so assembly language written for one type of processor can't be used on another. Despite its complexity, time-intensive coding process, and machine-specific nature, assembly language is still utilized for speed optimization and hardware manipulation where high-level languages may not be sufficient.
@ -1 +1,3 @@
# Rust
# Rust

**Rust** is a modern, open-source, multi-paradigm programming language designed for performance and safety, especially safe concurrency. It was initially designed at Mozilla Research as a language that can provide memory safety without garbage collection. Since then, it has gained popularity due to features and performance that often compare favorably to languages like C++. Its rich type system and ownership model guarantee memory safety and thread safety while maintaining a high level of abstraction. Rust supports a mixture of imperative procedural, concurrent actor, object-oriented, and pure functional styles.
@ -1 +1,3 @@
# Python
# Python

Python is a popular high-level programming language designed by Guido van Rossum and first released in 1991. It is preferred for its simplicity in learning and usage, making it a great choice for beginners. Python's design philosophy emphasizes code readability with its use of significant indentation. Its language constructs and object-oriented approach aim to help developers write clear, logical code for small and large-scale projects. Python is dynamically typed and garbage-collected. Moreover, it supports multiple programming paradigms, including procedural, object-oriented, and functional programming. Python is often used for web development, software development, database operations, and machine learning. Although not typically used for game development, some game developers utilize Python for scripting and automating tasks.
@ -1 +1,3 @@
# Programming languages
# Programming Languages

Programming languages are crucial to game development as they are the backbone of game design and functionality. A variety of languages can be used, but some are more commonly preferred in the industry due to their robustness and efficiency. The most popular ones include C++, C#, and Java. **C++**, a high-level language widely used for developing video games, is known for its speed and efficiency. **C#**, which was developed by Microsoft, is extensively used with the Unity game engine to develop multi-platform games. **Java** is well-established in the sector as well, and it is often utilized in the development of Android games. It's pivotal for a game developer to select a language that aligns with the project's requirements and nature. Whichever programming language you choose, a deep understanding of its constructs, logic, and capabilities is required for successful game development.
@ -1 +1,3 @@
# Ray tracing
# Ray Tracing

Ray tracing is a rendering technique in computer graphics that simulates the physical behavior of light. It generates images with a high degree of visual realism, as it captures shadows, reflections, and refraction. Ray tracing follows the path of light backwards from the camera (eye) to the source (light object), calculating the color of each pixel in the image along the way. The color calculation considers the object from which the ray has reflected or refracted and the nature of the light source, i.e., whether it's ambient, point, or spot. Ray tracing handles effects that rasterization algorithms like scanline rendering and Z-buffering find difficult to reproduce.
@ -1 +1,3 @@
# Rasterization
# Rasterization

In the realm of computer graphics, **Rasterization** refers to the process of converting image data into a bitmap form, i.e., pixels or dots. It is predominantly used in 3D rendering, where three-dimensional polygonal shapes are transformed into a two-dimensional image possessing height, width, and color data. It is a scan-conversion process where vertices and primitives, upon being processed through the graphics pipeline, are mathematically converted into fragments. Every fragment finds its position in a raster grid. The process culminates in fragments becoming pixels in the frame buffer, the final rendered image you see on the screen. However, it's essential to note that rasterization does limit the image's resolution to the resolution of the device on which it is displayed.
@ -1 +1,3 @@
# Graphics pipeline
# Graphics Pipeline

The **Graphics Pipeline**, also often referred to as the rendering pipeline, is a sequence of steps that a graphics system follows to convert a 3D model into a 2D image or view that can be displayed onto a screen. These steps typically include transformation, clipping, lighting, rasterization, shading, and other processes. Each step in the pipeline represents an operation that prepares or manipulates data to be used in downstream stages. The pipeline begins with a high-level description of a scene and ends with the final image rendered onto the screen. It is a primary concept in computer graphics that developers should learn as it can help in efficient rendering and high-quality visualization.
@ -1 +1,3 @@
# Sampling
# Sampling

**Sampling** in computer graphics is a method used to convert a continuous signal (an image, sound, or light function) into a discrete digital representation. The process works by taking measurements at regular intervals, known as samples, which is what gives 'sampling' its name. Some common sampling techniques include uniform sampling (evenly spaced samples), random sampling (samples taken at random positions), and jittered sampling (a compromise between uniform and random sampling). The higher the sampling rate, the more accurately the original function can be reconstructed from the discrete samples. Effective sampling is a significant aspect of achieving realistic computer graphics.
@ -1 +1,5 @@
# Computer animation
# Computer Animation

Computer animation refers to the art of creating moving images with computers, and it is an increasingly critical component of the game development industry. Essentially, it's divided into two categories: 2D animation and 3D animation. 2D animation, also referred to as vector animation, involves the creation of images in a two-dimensional environment using techniques such as morphing, tweening, and onion skinning. On the other hand, 3D animation, also known as CGI, involves moving objects and characters in a three-dimensional space. The animation process typically involves creating a mathematical representation of a three-dimensional object, which is then manipulated within a virtual space by an animator to create the final animation. Software like Unity, Maya, and Blender is commonly used for computer animation in game development.
@ -1 +1,3 @@
# Color
# Color

In the realm of computer graphics, color plays an integral role. It can be defined in various color models such as RGB (Red, Green, Blue), CMYK (Cyan, Magenta, Yellow, Black), and others. RGB is a color model that combines the primary colors (red, green, blue) in different amounts to produce a spectrum of colors. This model is often used in digital displays. In contrast, CMYK is a color model used in color printing. It uses cyan, magenta, yellow, and black as the primary colors. HSL (Hue, Saturation, Lightness) and HSV (Hue, Saturation, Value) are other useful models that represent colors in a way closer to human perception. Another important element of color in computer graphics is color depth, also known as bit depth, which determines the number of colors that can be displayed at once.
@ -1 +1,3 @@
# Visual perception
# Visual Perception

Visual perception is a fundamental aspect of game development, widely explored within the field of computer graphics. It involves the ability to interpret and understand the visual information that our eyes receive, which is essential for creating immersive and dynamic visual experiences in games. The study involves understanding light, color, shape, form, depth, and motion, among others, which are key elements in creating aesthetically pleasing and engaging graphics. Making full use of visual perception allows game developers to control and shape how players interact with and experience the game world, significantly enhancing not only the visual appeal but also the overall gameplay.
@ -1 +1,3 @@
# Tone reproduction
# Tone Reproduction

`Tone Reproduction` or `Tone Mapping` is the technique used in computer graphics to simulate the appearance of high-dynamic-range images on media with a more limited dynamic range. Printouts, CRT and LCD monitors, and other displays can only reproduce a reduced dynamic range. The technique is widely used in game development to improve the visual experience. The process involves taking the light from a scene and mapping it to a smaller range of tones while preserving the visual appearance, particularly brightness, saturation, and hue. There are various tone mapping algorithms available, each with unique attributes suitable for different imaging tasks.
@ -1 +1,3 @@
# Render equation
# Rendering Equation

The **rendering equation** (also called the render equation) is a fundamental principle in computer graphics that serves as the basis for most advanced lighting algorithms today. First introduced by James Kajiya in 1986, it defines how light interacts with surfaces in a given environment. The equation models light's behavior, taking into account aspects such as transmission, absorption, scattering, and emission. It is computationally intensive to solve exactly; however, many methods have been developed to approximate it, allowing the production of highly realistic images in computer graphics.
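In its common hemispherical form, the equation states that the outgoing radiance \(L_o\) at a point \(x\) in direction \(\omega_o\) is the emitted radiance plus all incoming radiance weighted by the surface's BRDF \(f_r\) and the cosine of the incident angle:

\[
L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i
\]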
@ -1 +1,3 @@
# Diffuse
# Diffuse

**Diffuse** shading is one of the fundamental aspects of a game's lighting system. It models light that scatters in many directions after striking a surface, resulting in a soft, matte, non-specular appearance. This type of reflection looks the same from all viewing angles regardless of the viewer's perspective, giving objects in video games a more realistic, three-dimensional look. It's essential for modeling the way light hits flat, matte, or non-shiny surfaces like cloth or rough stone. Factors such as the angle of incidence and the light's intensity influence the brightness of the diffuse reflection.
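A minimal sketch of the Lambertian diffuse term many shading models use (the types and names are illustrative, not a specific engine's shading API):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Lambertian diffuse: brightness depends only on the angle between the
// surface normal and the direction toward the light.
struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// diffuse = albedo * lightColor * max(dot(N, L), 0)
Vec3 lambert(Vec3 albedo, Vec3 lightColor, Vec3 normal, Vec3 toLight) {
    float ndotl = std::max(dot(normalize(normal), normalize(toLight)), 0.0f);
    return {albedo.x * lightColor.x * ndotl,
            albedo.y * lightColor.y * ndotl,
            albedo.z * lightColor.z * ndotl};
}

int main() {
    Vec3 c = lambert({0.8f, 0.2f, 0.2f}, {1, 1, 1}, {0, 1, 0}, {0, 1, 1});
    std::printf("shaded color: (%.2f, %.2f, %.2f)\n", c.x, c.y, c.z);
    return 0;
}
```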
@ -1 +1,3 @@ |
||||
# Reflection |
||||
# Reflection |
||||
|
||||
Reflection in game development, specifically in shaders, is a phenomenon that simulates the bouncing of light off objects, similar to the way it happens in the real world. Shaders replicate this effect by tracing how rays from a light source strike an object's surface; at the point of contact, the light's color and angle are used to determine how it should reflect off that surface. Reflection in shaders is commonly classified into two types: specular reflection and diffuse reflection. Specular reflection is the mirror-like reflection of light from a surface, where each incident ray leaves the surface at an angle equal and opposite to its angle of incidence. Diffuse reflection, on the other hand, scatters light into many directions, giving a softer effect. In computer graphics these reflections are usually quantified with a reflection model such as the Phong reflection model or the Lambertian reflectance model. |
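
The mirror reflection direction used in specular models follows the standard reflect formula, R = I - 2 * dot(N, I) * N; a small C++ sketch (the vector type is illustrative):

```cpp
struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Mirror-like (specular) reflection of an incident direction I about the unit normal N.
// I points from the light (or camera) toward the surface; N must be normalized.
Vec3 reflect(const Vec3& I, const Vec3& N) {
    float k = 2.0f * dot(N, I);
    return { I.x - k * N.x, I.y - k * N.y, I.z - k * N.z };
}
```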
@ -1 +1,3 @@ |
||||
# Specular |
||||
# Specular |
||||
|
||||
Specular reflection, often referred to simply as "specularity", pertains to the glossiness of an object's surface. It represents the mirror-like reflection of light from the surface, providing the shiny, polished appearance of an object. The specular value describes how much of the light hitting the object is reflected directly toward the viewer, creating a bright, shiny spot known as a specular highlight. This can be fine-tuned using specular color and intensity settings to match specific material properties, such as the reflectiveness of plastic versus metal. In practice, it is used to simulate everything from the broad, subtle highlights of rough surfaces to the tight, bright highlights of polished ones, offering a greater sense of realism in the game's visual presentation. |
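
A minimal sketch of a Blinn-Phong style specular term, which is one common way this highlight is computed (parameter names are illustrative):

```cpp
#include <algorithm>
#include <cmath>

// Blinn-Phong specular factor. nDotH is the dot product between the surface normal
// and the half-vector between the light and view directions; shininess controls how
// tight (polished) or broad (dull) the highlight appears.
float blinnPhongSpecular(float nDotH, float shininess, float specularIntensity) {
    return specularIntensity * std::pow(std::max(nDotH, 0.0f), shininess);
}
```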
@ -1 +1,3 @@ |
||||
# Bump |
||||
# Bump |
||||
|
||||
Bump mapping is a technique in computer graphics for simulating bumps and wrinkles on the surface of an object. This is achieved by modifying the surface normals of the object and using the modified normals during lighting calculations. The result is an apparently bumpy surface rather than a smooth surface, despite the surface geometry being unchanged. Normal maps, which are a type of bump map, store the perturbations of the surface normals in an RGB image. When applied to a model, they can greatly enhance the level of perceived detail without increasing the polygon count. To emphasize, bump mapping doesn't change the geometry of the model, only the lighting calculations across its surface. |
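
A minimal sketch of how an RGB normal-map texel (stored in the 0-255 range) is typically decoded back into a unit-length normal for lighting; the 8-bit encoding assumed here is common but not the only possible format:

```cpp
#include <cmath>
#include <cstdint>

struct Vec3 { float x, y, z; };

// Decode an 8-bit RGB normal-map texel: each channel maps from [0, 255] to [-1, 1].
// The result is re-normalized before being used in lighting calculations.
Vec3 decodeNormal(uint8_t r, uint8_t g, uint8_t b) {
    Vec3 n = { r / 255.0f * 2.0f - 1.0f,
               g / 255.0f * 2.0f - 1.0f,
               b / 255.0f * 2.0f - 1.0f };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}
```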
@ -1 +1,3 @@ |
||||
# Horizon |
||||
# Horizon |
||||
|
||||
In the context of game development, the "horizon" typically refers to the farthest visible point in a game's terrain, map, or landscape: the line where the sky meets the ground from the player's perspective. The treatment of the horizon can greatly influence the immersion and realism of a game world. For instance, developers often use techniques like horizon mapping or a skybox to visually represent the horizon and far-off scenery. A detailed and well-designed horizon can add a sense of vastness to the world, even if the playable area is limited. However, the horizon also poses performance considerations, as rendering vast landscapes can lead to heavy processing and memory demands. Techniques like fogging, level of detail (LOD) reduction, and horizon occlusion are therefore often used to manage performance. |
@ -1 +1,7 @@ |
||||
# Mapping |
||||
# Mapping |
||||
|
||||
"Mapping" in game development, especially in the context of shaders, predominantly refers to Texture Mapping and Normal Mapping. |
||||
|
||||
- **Texture Mapping**: This is the application of a texture (an image or colour data) onto a 3D model's surface. It defines how a flat 2D image wraps around a 3D model, or how that image is stretched across the model's surface to paint its appearance. This could describe anything from the colour of objects to their roughness or reflectivity; a minimal lookup sketch follows this list. |
||||
|
||||
- **Normal Mapping**: This is a technique used to create the illusion of complexity in the surface of a 3D model without adding any additional geometry. A Normal Map is a special kind of texture that allows the addition of surface details, such as bumps, grooves, and scratches which catch the light as if they are represented by real geometry, making a low-polygon model appear as a much more complex shape. |
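
A minimal sketch of a nearest-neighbour texture lookup, where a UV coordinate in [0, 1] is mapped to a texel index in the image (the texture layout used here is an assumption for illustration):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Texture {
    int width = 0;
    int height = 0;
    std::vector<uint32_t> texels;  // row-major, packed 0x00RRGGBB
};

// Nearest-neighbour sample: map u, v in [0, 1] to the closest texel.
uint32_t sampleNearest(const Texture& tex, float u, float v) {
    int x = std::clamp(static_cast<int>(u * tex.width), 0, tex.width - 1);
    int y = std::clamp(static_cast<int>(v * tex.height), 0, tex.height - 1);
    return tex.texels[y * tex.width + x];
}
```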
@ -1 +1,3 @@ |
||||
# Parallax |
||||
# Parallax |
||||
|
||||
Parallax is a powerful technique employed in game development to establish depth in 2D games. The term 'Parallax' comes from the Greek word 'parallaxis', which means alteration. In game development, parallax creates an illusion of depth by making background images move slower compared to the foreground images when the player moves. This is due to the phenomenon where objects that are farther away seem to move at a slower speed compared to closer ones. There are different types of parallax techniques like the traditional parallax scrolling, multi-layered parallax, and parallax mapping. Parallax Mapping, also known as offset mapping or virtual displacement mapping, is a method used to fake details on a surface to give the illusion of depth or surface irregularities. |
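
A minimal sketch of classic parallax scrolling: each background layer is offset by a fraction of the camera's movement, so distant layers appear to move more slowly (the layer factors are illustrative):

```cpp
#include <cstdio>

// Each layer scrolls at a fraction of the camera speed; smaller factors read as "farther away".
float layerOffsetX(float cameraX, float parallaxFactor) {
    return cameraX * parallaxFactor;
}

int main() {
    float cameraX = 100.0f;
    std::printf("far mountains: %.1f\n", layerOffsetX(cameraX, 0.2f));  // moves 20 units
    std::printf("near trees:    %.1f\n", layerOffsetX(cameraX, 0.8f));  // moves 80 units
    std::printf("foreground:    %.1f\n", layerOffsetX(cameraX, 1.0f));  // moves with the camera
    return 0;
}
```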
@ -1 +1,4 @@ |
||||
# Texture |
||||
# Texture |
||||
|
||||
|
||||
**Texture** refers to the 2D artwork applied to a 3D model to give it a convincing and detailed appearance in video games. Textures can represent various properties like color, reflectivity, light absorption, and transparency, depending on the needs of the game. They can be created through methods like drawing, painting, or photography and then manipulated digitally. Depending on the topological structure of the 3D model, it can be textured directly via UV maps or through procedural methods. Different types of textures, such as albedo/diffuse, specular, normal/bump, and displacement maps, are used to achieve different visual effects. The texturing process is a crucial step in game development as it greatly enhances the realism and appeal of the 3D environment and characters. |
||||
|
@ -1 +1,3 @@ |
||||
# Shader |
||||
# Shader |
||||
|
||||
Shaders are small programs used in 3D computer graphics. They compute rendering effects by performing calculations and transformations on vertex and pixel data; among other things, a shader is responsible for determining the final color of each rendered pixel. There are several types of shaders: vertex shaders, geometry shaders, pixel (fragment) shaders, and compute shaders. Each is written to manipulate a specific stage of the rendering pipeline, such as transforming vertices, shading pixels, or generating geometry. They are essential tools for game developers aiming to produce realistic and engaging visual experiences. |
@ -1 +1,3 @@ |
||||
# Stencil shadow |
||||
# Stencil Shadow |
||||
|
||||
`Stencil shadows` are a technique used in 3D computer graphics for creating shadows. The stencil shadow algorithm treats a shadow as a 3D volume of space, known as a shadow volume. Any part of the scene that lies inside this shadow volume is in shadow; anything outside it is lit. The shadow volume is created by extruding the polygonal silhouette of a 3D object away from the light source along the lines of sight from that light. For equivalently complex objects, the number of edges or vertices needed to fill the stencil buffer is generally smaller than the number of pixels needed to compute shadow maps, making stencil shadows efficient in that respect. However, the shadows produced by this technique are hard-edged and can look unrealistic if not further refined. |
@ -1 +1,3 @@ |
||||
# 2d |
||||
# 2D |
||||
|
||||
2D, or two-dimensional, refers to games or elements designed on a two-dimensional plane. It is a classic approach in game development, used primarily for platformers, puzzles, RPGs (role-playing games), and arcade games. In 2D games the graphics are typically simpler, since they only deal with height and width, disregarding depth, and the underlying mathematics and physics are simpler than in 3D. Typical graphical assets in 2D game development are sprites and tilemaps. Popular engines for 2D development include `Unity 2D`, `Godot`, and `GameMaker Studio 2`. These engines simplify the process of developing 2D games, providing tools and features such as 2D physics, sprite manipulation, and AI (Artificial Intelligence) pathfinding specific to two dimensions. |
@ -1 +1,3 @@ |
||||
# Cascaded |
||||
# Cascaded |
||||
|
||||
"Cascaded" refers to the Cascaded Shadow Maps (CSM) technique implemented in graphical computations. It involves the procedure of dividing the view frustum, the portion of a 3D space visualized on the screen, into several sub-frustums or "cascades". Each cascade corresponds to a different shadow map, allowing various levels of details for shadows in a single render. Each cascade uses a different region of the shadow map texture, facilitating the ability to provide finer shadow detail close to the camera and coarser detail as the distance from the camera increases. This technique helps in the efficient utilization of shadow map resolution and improves visual quality by reducing aliasing artifacts in the distance. |
@ -1 +1,3 @@ |
||||
# Cube |
||||
# Cube |
||||
|
||||
A **Cube** is a three-dimensional geometric figure known for its symmetric and box-like shape. It is also characterized by its equal length, width, and height dimensions. In game development, cubes can be the starting point for creating more complex 3D models. They are utilized in numerous ways such as creating physical objects, defining environments, constructing characters, and more. Furthermore, in the context of a shadow map, a cube map can be generated to deal with omnidirectional light sources. Cube mapping, a process that uses a six-sided cube as the map shape, is particularly useful for creating reflections and applying textures on 3D models. |
@ -1 +1,7 @@ |
||||
# Shadow map |
||||
# Shadow Map |
||||
|
||||
Shadow mapping is a technique used in computer graphics to add shadows to a scene. This process involves two steps - generating the shadow map and then rendering the scene. |
||||
|
||||
In the shadow map generating step, the scene is rendered from the perspective of the light source capturing depth information. This results in a texture that stores the distance from the light to the nearest surface along each light direction, a “shadow map”. |
||||
|
||||
In the scene rendering step, the scene is rendered from the camera’s perspective. For each visible surface point, its distance from the light is calculated and compared to the corresponding stored distance in the shadow map. If the point's distance is greater than the stored distance, the point is in shadow; otherwise, it's lit. This information is used to adjust the color of the point, producing the shadow effect. |
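
The comparison in the second step boils down to a small depth test per shaded point; a minimal sketch (the bias value is an assumption, commonly used to reduce self-shadowing artifacts known as shadow acne):

```cpp
// Returns true if the surface point is in shadow.
// distanceToLight:  depth of the point as seen from the light, for the current pixel
// shadowMapDepth:   closest depth stored in the shadow map along that light direction
// bias:             small offset that avoids false self-shadowing ("shadow acne")
bool inShadow(float distanceToLight, float shadowMapDepth, float bias = 0.005f) {
    return distanceToLight - bias > shadowMapDepth;
}
```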
||||
|
@ -1 +1,3 @@ |
||||
# Directional light |
||||
# Directional Light |
||||
|
||||
`Directional light` is a type of light commonly used in 3D game development. As the name suggests, this form of light appears to come from a specific direction, much as sunlight does. Rather than emanating from a specific position the way point lights or spotlights do, it extends infinitely in a single direction, which allows it to illuminate all objects within a scene uniformly. Directional light is particularly useful for replicating large and distant light sources such as the sun or moon. |
@ -1 +1,3 @@ |
||||
# Light source |
||||
# Light Source |
||||
|
||||
In game development, a **light source** is a critical component that impacts the visual appeal and realism of the scene. It represents any object in the game scene that emits light, such as the sun, a lamp, or a torch. Light sources can be categorized as static or dynamic. Static light sources do not move or change throughout the game, while dynamic light sources can move and their properties can change in real-time. The properties of light sources that can be manipulated include intensity (how bright the light is), color, range (how far the light extends), direction, and type (point, directional, or spot). The lighting and shading effects are then computed based on these light source properties and how they interact with various objects in the game scene. |
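
The properties listed above map naturally onto a small data structure; a hypothetical sketch in C++ (the field names and enum are illustrative, not taken from any specific engine):

```cpp
enum class LightType { Point, Directional, Spot };

struct Color { float r = 1.0f, g = 1.0f, b = 1.0f; };
struct Vec3  { float x = 0.0f, y = 0.0f, z = 0.0f; };

// A simple description of a light source: which kind it is, how bright and what
// color it is, how far it reaches, where it sits, and which way it points.
struct Light {
    LightType type      = LightType::Point;
    Color     color;
    float     intensity = 1.0f;
    float     range     = 10.0f;                 // ignored for directional lights
    Vec3      position;                          // ignored for directional lights
    Vec3      direction = {0.0f, -1.0f, 0.0f};   // ignored for point lights
    bool      isStatic  = false;                 // static lights never move or change at runtime
};
```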
@ -1 +1,3 @@ |
||||
# Infinite light |
||||
# Infinite Light |
||||
|
||||
`Infinite light` in game development refers to a type of light source that emits light rays in parallel. The source is assumed to be located infinitely far away, hence the term 'infinite light', and each ray coming from it is treated as a straight, parallel line. This is especially useful for simulating sunlight or other far-off light sources in outdoor scenes, since by the time their rays reach the scene they can safely be assumed to be parallel to each other. Keep in mind, however, that an infinite light has no position and therefore no distance-based falloff: it lights the whole scene evenly rather than producing localized pools of light. |
@ -1 +1,3 @@ |
||||
# Point light |
||||
# Point Light |
||||
|
||||
A `Point Light` is a common light source in game development. It simulates light radiating from a single point equally in all directions, like a light bulb in a room. Because it emits light in every direction, a point light can affect objects all around it, not just those lying along one particular direction. Unlike a directional or spot light, a point light has a position in space but no inherent direction. Although point lights have an associated range or radius beyond which their intensity is treated as zero, they can consume more computation than other light types because they influence a larger area of the scene, so careful planning is required when using them. |
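
A minimal sketch of the common constant/linear/quadratic attenuation model used for point lights (the coefficient values suggested in the comment are illustrative defaults):

```cpp
// Classic point-light attenuation: intensity falls off with distance d from the light.
// Typical starting values: constant = 1.0, linear = 0.09, quadratic = 0.032.
float pointLightAttenuation(float d, float constant, float linear, float quadratic) {
    return 1.0f / (constant + linear * d + quadratic * d * d);
}
```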
@ -1 +1,3 @@ |
||||
# Spot light |
||||
# Spot Light |
||||
|
||||
A **Spot Light** is a type of light source used in game development, often utilized to create focused, directional lighting within a specific radius, imitating real-world sources like a flashlight or a stage spotlight. The two primary properties of a spot light are its cone angle and its fall-off. The cone angle determines the size of the illuminated area, while the fall-off controls how quickly the light diminishes towards the edges of the light cone. Spotlights can create dramatic effects and are essential in driving attention towards specific game elements or areas due to their constrained, targeted lighting. |
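
A minimal sketch of a spot light's cone falloff using an inner and an outer cone angle, expressed through their cosines; the smooth edge between them models the fall-off described above (names are illustrative):

```cpp
#include <algorithm>

// theta    = cosine of the angle between the spot direction and the direction to the fragment
// cosInner = cosine of the inner cone angle (full intensity inside)
// cosOuter = cosine of the outer cone angle (zero intensity outside)
float spotFalloff(float theta, float cosInner, float cosOuter) {
    float t = (theta - cosOuter) / (cosInner - cosOuter);
    return std::clamp(t, 0.0f, 1.0f);  // 1 inside the inner cone, fading to 0 at the outer edge
}
```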
@ -1 +1,3 @@ |
||||
# Lightning and shadow |
||||
# Lighting and Shadow |
||||
|
||||
**Lighting and Shadows** are paramount elements in computer graphics, significantly contributing to the visual realism of a game. They create depth and a sense of a three-dimensional space in a two-dimensional display. **Lighting** in game development mimics real-world light properties. It involves calculating how light interacts with different objects and surfaces based on their material characteristics and the light's intensity, direction, and color. Various algorithms, like Ray Tracing or Rasterization, are used to simulate these interactions. On the other hand, **shadows** are the areas unlit due to the blockage of light by an object. Producing realistic shadows involves complex computations, factoring in the light's position, the blocking object's shape and size, and the affected area's distance. Shadow Mapping and Shadow Volume are common techniques for creating shadows in game development. Special attention to these aspects can dramatically increase the perceived realism and immersion in the game environment. |
@ -1 +1,3 @@ |
||||
# Fog |
||||
# Fog |
||||
|
||||
In the framework of game development, **fog** is a visual technique applied effectively for various artistic and optimization purposes. Aesthetically, it's used to simulate different atmospheric effects such as smoke, fog, mist, and dust. Fog can also be utilized to conceal or lessen the details of distant objects, hence reducing the rendering load on the system. This technique is often called "distance fog". Moreover, specialized types of fog like "volumetric fog" add a three-dimensional feel to the game, making the lighting atmosphere more immersive and realistic. Note that fog settings and effects can be adjusted based on different game engines, such as Unreal Engine, Unity, or Godot. |
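
A minimal sketch of two common distance-fog models, linear and exponential; the returned factor is typically used to blend the scene color toward the fog color (parameter names and values are illustrative):

```cpp
#include <algorithm>
#include <cmath>

// Linear fog: no fog before fogStart, full fog at fogEnd.
// Returns 1 = no fog, 0 = fully fogged.
float linearFogFactor(float distance, float fogStart, float fogEnd) {
    return std::clamp((fogEnd - distance) / (fogEnd - fogStart), 0.0f, 1.0f);
}

// Exponential fog: density controls how quickly objects fade into the fog.
float expFogFactor(float distance, float density) {
    return std::exp(-density * distance);
}
```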
@ -1 +1,3 @@ |
||||
# Occluder |
||||
# Occluder |
||||
|
||||
An **Occluder** in game development is an object or piece of geometry, such as a building or a terrain feature, that blocks the camera's line of sight to other objects in the game environment. When the occluder hides another object from the camera's perspective, the hidden object does not need to be rendered. The process of managing occluders is known as occlusion culling. The purpose of using occluders is to optimize the game and improve its performance by cutting unnecessary rendering work. However, setting up occluders requires careful planning to ensure that it does not affect the gameplay or visual quality. |
@ -1 +1,3 @@ |
||||
# Frustum |
||||
# Frustum |
||||
|
||||
`Frustum` is a term commonly used in the game development industry and is closely associated with the concept of "culling". It describes the field of view of the camera, or more specifically, the portion of the world that is currently visible to the camera in the game. Shaped like a truncated pyramid (a pyramid with its top cut off), the frustum's small end sits at the camera and the larger end stretches out into the distance. Objects within this frustum are what the player sees on screen; objects outside it are not rendered, which helps improve the performance of the game. Frustum culling, then, is the process of determining which objects lie within the frustum and should be drawn. |
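
A minimal sketch of frustum culling with a bounding-sphere test: the sphere is culled if it lies entirely on the negative side of any of the six frustum planes (extracting the planes from the camera matrices is assumed to happen elsewhere):

```cpp
struct Vec3  { float x, y, z; };
struct Plane { Vec3 normal; float d; };  // plane equation: dot(normal, p) + d = 0, normal points inward

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns true if the bounding sphere is at least partially inside the frustum.
bool sphereInFrustum(const Plane (&planes)[6], const Vec3& center, float radius) {
    for (const Plane& p : planes) {
        // Signed distance from the sphere center to the plane.
        if (dot(p.normal, center) + p.d < -radius) {
            return false;  // completely outside this plane, so outside the frustum
        }
    }
    return true;  // inside or intersecting all six planes
}
```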
@ -1 +1,3 @@ |
||||
# Culling |
||||
# Culling |
||||
|
||||
**Culling** is a performance optimization strategy employed in game development to improve efficiency and speed. Culling reduces the rendering workload by eliminating elements that are not visible to the player or that fall outside the game's viewport. There are several types of culling, the two main ones being **frustum culling** and **occlusion culling**. Frustum culling eliminates objects that are outside the camera's field of view, while occlusion culling discards objects that are hidden or blocked by other objects. Culling ensures that only the elements that are necessary, or that add value to the player's experience, are processed. |
@ -1 +1,3 @@ |
||||
# Light |
||||
# Light |
||||
|
||||
Lighting in game development is crucial for creating an immersive and realistic gaming experience. There are several types of light sources, including directional lights, point lights, and spotlights. A directional light simulates sun or moonlight, with parallel rays illuminating the game world. A point light emanates from a single point in all directions, similar to a light bulb. Spotlights produce a cone of light, similar to a flashlight or a stage spotlight. There is also ambient light, which provides a base level of illumination that hits every surface equally, regardless of its orientation or position, ensuring no area is ever in complete darkness. These different sources of light can be combined and tuned to create the desired mood and aesthetic in a scene. |