KraftShade 2: Mastering Shaders with KraftShade — Your Canvas on the GPU
Following our previous post, which highlighted KraftShade’s power in Android graphics and demonstrated its Jetpack Compose integration via a declarative pipeline DSL, this entry will reveal the fundamental shader mechanics that form your GPU canvas. You’ll learn step-by-step how to craft your own unique visual effects using KraftShade’s shader system.

What is a Shader?
At its core, a shader is a small program that runs directly on your device’s GPU, designed for highly parallel and efficient visual computations.
Shaders accept various inputs (such as textures, colors, or positional data) and process them to produce an output, typically a color value for each pixel. This output is then rendered to a frame buffer, which can be an intermediate buffer or the final display surface that is ultimately shown on the screen. In shader code, this final output is often represented by a special variable like gl_FragColor.

Since shaders execute on the GPU, they use a specialized programming language called GLSL (OpenGL Shading Language). This language is specifically designed for parallel execution, which is crucial for visual computation.
For example, a simple shader program in GLSL might look like this:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform lowp float brightness;

void main()
{
    lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    gl_FragColor = vec4((textureColor.rgb + vec3(brightness)), textureColor.w);
}
Vertex Shader and Fragment Shader
The example we’ve just shown is a fragment shader that applies a brightness effect to inputImageTexture (think of it as processing an input image). When programming with OpenGL, you typically write shaders for two stages and send them to the GPU for execution: the vertex shader and the fragment shader.
The vertex shader defines the positions of vertices that determine what geometry should be drawn. In the simplest case, we pass four corner coordinates to create a rectangle that covers the entire screen: top-left (-1, +1), top-right (+1, +1), bottom-left (-1, -1), and bottom-right (+1, -1). The vertex shader processes these coordinates and passes the transformed positions to the next stage in the pipeline: the fragment shader.
OpenGL uses a normalized coordinate system where the visible area ranges from -1 to +1 in all three dimensions (x, y, z). When you specify vertices within this range, they fill the entire screen. For example, if you set vertices at [(0, +1), (+1, +1), (0, 0), (+1, 0)], the rendered result would only appear in the top-right quarter of the screen, since these coordinates only cover that portion of the normalized space.
Once the vertex shader defines the geometry (in our case, a rectangle covering the screen from -1 to +1 in both width and height), the GPU determines how many pixels need to be drawn. If the target resolution is 100×100 pixels, then all 10,000 pixels within that rectangle need to be rendered. This is where the fragment shader comes into play — it calculates the color for each individual pixel that needs to be drawn.
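To make this concrete, here is a minimal pass-through vertex shader sketch for the full-screen quad described above. The attribute names (position, inputTextureCoordinate) follow common GPUImage-style conventions and are assumptions for illustration, not necessarily KraftShade's exact defaults:

// Minimal full-screen-quad vertex shader (a sketch; attribute names are assumed)
attribute vec4 position;                // one of (-1,+1), (+1,+1), (-1,-1), (+1,-1)
attribute vec4 inputTextureCoordinate;  // matching texture coordinate for each corner

varying vec2 textureCoordinate;         // handed off to the fragment shader

void main()
{
    gl_Position = position;             // no transformation needed for a full-screen quad
    textureCoordinate = inputTextureCoordinate.xy;
}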

This explanation is a simplified version of how the GPU rendering pipeline works, and there are many additional details involved. However, if you’re a beginner who wants to start having fun with OpenGL programming as quickly as possible, this should provide enough foundational knowledge to get you started.
Tools
OpenGL programming can be challenging and frustrating due to complex environment setup. These handy tools will significantly streamline your development process — try them before building everything from scratch!
ShaderToy
This web-based platform lets you write shader code and see results instantly, making it incredibly valuable for rapid prototyping. The immediate feedback loop is much faster than developing directly on Android. We’ll use this site to build a simple effect in upcoming examples.
Homepage: https://www.shadertoy.com/

GLSL IntelliJ plugin
Since OpenGL functions accept shader code as string parameters that get compiled on the GPU, reading and debugging shader code becomes difficult without proper syntax highlighting. This plugin solves the problem by providing syntax highlighting through the @Language("GLSL") annotation, plus code completion support for enhanced productivity.
Homepage: https://plugins.jetbrains.com/plugin/18470-glsl

Shader Programming with ShaderToy
Programming with shaders is completely different from our most familiar language, Kotlin. To make it easier to learn, we can start building effects on top of an existing shader playground: ShaderToy. We don't need to worry about setting up an OpenGL environment correctly for now; just play around, get familiar with this C-like language (GLSL), and you will find that creating custom shader effects is quite fun.
Note that shaders made in ShaderToy are essentially fragment shaders. We will also focus on fragment shaders in this article, since they play the more important role in creating image effects. In contrast, vertex shaders are mostly the same across effects and can be reused.
Now go to the home page. In the top-right corner there's a button labeled "New". Click it, and you will get a simple example that shows a colorful gradient animation:

The mainImage function on the right-hand side is the shader code that generates this animation. GLSL is similar to older languages like C, requiring a semicolon at the end of each statement. This function has two parameters, fragColor and fragCoord. From the qualifier before the type (in, out), it's easy to infer that in represents input to this function, while out represents output.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
We can also see that fragColor's type is vec4. A color represented by vec4 is a vector that holds red, green, blue, and alpha values. In short, you can remember it as (R, G, B, A).
Values in the shader world are normalized: the RGB values here are not 0–255 but 0–1, and the same applies to the coordinate system, where values fall between 0 and 1 or between -1 and +1.
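For example, an opaque orange that would be (255, 128, 0) in familiar 8-bit RGB is written in GLSL as:

vec4 orange = vec4(1.0, 0.5, 0.0, 1.0); // 255/255, 128/255 ≈ 0.5, 0/255, alpha = 1.0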
Now, let's work on our very first experiment with ShaderToy! Replace the code on line 10 with the following:
fragColor = vec4(1.0, 0.0, 0.0, 1.0);
Can you guess what the result will look like? Click the "▶️" button at the bottom to compile, and you will see it!

The screen is now filled with red! Quite straightforward, right? Next, we're going to play with the coordinate system: look at line 4, where there's a variable called uv.
// Normalized pixel coordinates (from 0 to 1)
vec2 uv = fragCoord/iResolution.xy;
The comment helpfully tells us the range of this variable: from 0 to 1. You can think of it as the coordinate we present to the screen. But how do we use it? We can use a simple if-else condition to find out!
// Normalized pixel coordinates (from 0 to 1)
vec2 uv = fragCoord/iResolution.xy;

// Output to screen
if (uv.x > 0.5) {
    fragColor = vec4(1.0, 0.0, 0.0, 1.0);
} else {
    fragColor = vec4(0.0, 1.0, 0.0, 1.0);
}
vec2 has various built-in accessors; here we use uv.x to read the x component of uv. Before we explain what it does, let's hit the "▶️" button to see what the result looks like:

The result is now split into two parts: the right-hand side (x > 0.5) is red, and the left-hand side (x ≤ 0.5) is green. When reading the code above, you might have already guessed this result, but more importantly, this example shows how fragment shader code runs differently from the code we usually write.
Suppose you wanted to run this function to draw onto an Android canvas in a custom view. What values would you need to pass to its parameters? Smart as you are, you'll realize you would need a two-dimensional loop to call this function: the first loop over the x-axis, the second over the y-axis. The computational complexity would be O(width × height)!
// The fragment shader, conceptually a function that computes one pixel's color
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv = fragCoord/iResolution.xy;
    if (uv.x > 0.5) {
        fragColor = vec4(1.0, 0.0, 0.0, 1.0);
    } else {
        fragColor = vec4(0.0, 1.0, 0.0, 1.0);
    }
}

// On the CPU, you would have to call it once per pixel (pseudocode):
for (i in 0 until width) {
    for (j in 0 until height) {
        mainImage(pixelColor, vec2(i, j))
    }
}
In contrast, the GPU executes this program (the shader program) for every pixel in parallel, making it very efficient for drawing colors on the screen.
Load Texture
Now we have a basic idea of how a fragment shader works. The next thing we're going to try is working with textures. At the very bottom of the page, you can see a list of black blocks. Click the first block, labeled iChannel0.

After clicking, you will see a popup window where you can choose what kind of data to provide to iChannel0; you can then use the content of iChannel0 for drawing. Now click the "Textures" tab and choose one you'd like to apply.
To use the content of iChannel0, call the texture function to get the pixel color from iChannel0 at position uv:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord/iResolution.xy;
    fragColor = texture(iChannel0, uv);
}

This looks easy to an OpenGL expert, but how would you know there's a function called texture, or the type of iChannel0, when building your own custom effect?
Don't worry, everything you need is on this website. Look at the bottom right of the window: there's a "?" button. Just click it!
After clicking it, you will see a popup window titled "GLSL help", which contains everything you need to write shader code here. The Built-in Functions area lists all the basic built-in functions in GLSL, and the Shadertoy Inputs area lists all the inputs and their types. There you can see iChannel{i} mentioned with the type sampler2D. Once you know its type, you will easily find the texture function in Built-in Functions that accepts a sampler as its input.

GLSL Data types
Now it's time to learn the data types of GLSL. For simplicity, we will only introduce the types used in this post: sampler and vec.
sampler2D
In GLSL, sampler2D is a special uniform type that represents a 2D texture. It does not hold the texture data itself but acts as a handle that the shader can use to access a texture bound to a specific texture unit. To retrieve color or other data from the texture, you typically call a lookup function like texture2D (deprecated in modern GLSL, replaced by texture). For example:
uniform sampler2D tex;
vec4 color = texture(tex, uv); // uv is a vec2 in [0,1] range
Here, uv specifies the normalized coordinates for sampling the texture.
vec2, vec3, vec4
GLSL provides built-in vector types for compactly storing and operating on 2–4 float values:
- vec2 → two components (x, y)
- vec3 → three components (x, y, z)
- vec4 → four components (x, y, z, w)
They are commonly used for positions, directions, colors, and texture coordinates.
Example:
vec2 uv = vec2(0.5, 0.5);
vec3 color = vec3(1.0, 0.0, 0.0); // red
vec4 position = vec4(uv, 0.0, 1.0); // combine vec2 + scalars
You can also construct larger vectors from smaller ones, e.g.:
vec4 v = vec4(vec2(1.0, 2.0), vec2(3.0, 4.0));
Vectors also support swizzling, which means you can rearrange and access components with suffixes like .x, .y, .z, .w or .r, .g, .b, .a.
Basic Component Access
vec3 color = vec3(1.0, 0.5, 0.25);
float r = color.r; // 1.0
float g = color.g; // 0.5
float b = color.b; // 0.25
vec2 xy = color.xy; // (1.0, 0.5)
vec2 rg = color.rg; // (1.0, 0.5)
Reordering Components
vec3 color = vec3(1.0, 0.5, 0.25);
vec3 reversed = color.bgr; // (0.25, 0.5, 1.0)
vec2 swap = color.yx; // (0.5, 1.0)
Replicating Components
vec2 uv = vec2(0.3, 0.7);
vec3 repeat = uv.xxx; // (0.3, 0.3, 0.3)
vec4 mixed = uv.yyxx; // (0.7, 0.7, 0.3, 0.3)
GLSL Variables Outside main()
In GLSL, not all variables are declared inside the main() function. Some special variables are defined at the global scope and act as interfaces between:
- the CPU (application code) and shaders, or
- different shader stages (vertex → fragment).
These variables are the way external data flows into and between shaders:
- Attributes: fed by the CPU per vertex.
- Uniforms: constants set once by the CPU per draw call.
- Varyings: values passed from the vertex shader to the fragment shader, automatically interpolated across fragments.

In modern GLSL:
- attribute → replaced by in (vertex shader).
- varying → replaced by out (vertex shader) and in (fragment shader).
- uniform → still uniform.
In KraftShade, we still use the old style; the sketch below shows how these declarations look in practice.
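Here is a minimal old-style fragment shader sketch annotated with the role of each qualifier. The variable names follow the brightness example earlier; the matching vertex shader, which declares the attributes and writes the varying, is the sketch shown in the vertex shader section above:

precision mediump float;

varying vec2 textureCoordinate;      // varying: interpolated output from the vertex shader
uniform sampler2D inputImageTexture; // uniform: bound once by the CPU per draw call
uniform lowp float brightness;       // uniform: effect parameter set from application code

void main()
{
    vec4 color = texture2D(inputImageTexture, textureCoordinate);
    gl_FragColor = vec4(color.rgb + vec3(brightness), color.a);
}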
Make a grayscale filter
Now you have all the basic knowledge to make a simple effect. A grayscale filter is one of the best choices to begin with. Open ShaderToy and write the following code:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord/iResolution.xy;
    vec4 col = texture(iChannel0, uv);
    float grayColor = (col.r + col.g + col.b) / 3.;
    fragColor = vec4(vec3(grayColor), 1.0);
}
The gray color calculation is very simple: just sum the RGB values and divide by 3, and every colored pixel becomes grayscale:

But since we are programming with the GPU, there are some optimizations we can do:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord/iResolution.xy;
    vec4 col = texture(iChannel0, uv);
    float grayColor = dot(col.rgb, vec3(0.33));
    fragColor = vec4(vec3(grayColor), 1.0);
}
Using the built-in dot() function can improve performance, since GPUs implement dot products natively with fused multiply-add hardware, whereas dividing by 3.0 involves a division, which is typically more expensive.
Have fun with ShaderToy
The most interesting part for me is that you can easily write fancy animated effects in ShaderToy. You can do so by using the iTime variable:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord/iResolution.xy;
    vec4 col = texture(iChannel0, vec2(cos(uv.x + iTime/4.), uv.y));
    float grayColor = dot(col.rgb, vec3(0.33));
    fragColor = vec4(vec3(grayColor), 1.0);
}
In the above code, I made some adjustments to how we sample the color at each position. Copy and paste the code yourself and see the result in ShaderToy!
Grayscale filter in KraftShade
Finally, we can come back to creating effects in KraftShade. It's super easy to create a basic, color-based shader by extending our built-in class TextureInputKraftShader and overriding the method loadFragmentShader(). Here's how GrayscaleKraftShader is implemented in our library:
class GrayscaleKraftShader : TextureInputKraftShader() {
    override fun loadFragmentShader(): String = GRAYSCALE_FRAGMENT_SHADER
}

@Language("GLSL")
private const val GRAYSCALE_FRAGMENT_SHADER = """
precision highp float;

varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;

const highp vec3 W = vec3(0.2125, 0.7154, 0.0721);

void main()
{
    lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    float luminance = dot(textureColor.rgb, W);
    gl_FragColor = vec4(vec3(luminance), textureColor.a);
}
"""
Almost everything is the same as what we wrote on ShaderToy, but here a weighted vector is used so the grayscale effect looks better. This weighted formula is based on human perception: our eyes are not equally sensitive to all colors. We perceive green as the brightest, red as moderately bright, and blue as the darkest.
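If you'd like to compare the two approaches side by side, the same weighted formula can be pasted straight back into ShaderToy:

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord/iResolution.xy;
    vec4 col = texture(iChannel0, uv);
    // Perceptual luminance weights: green contributes most, blue least
    float luminance = dot(col.rgb, vec3(0.2125, 0.7154, 0.0721));
    fragColor = vec4(vec3(luminance), 1.0);
}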
Other Built-in Shaders in KraftShade
While learning to create custom shaders is valuable, KraftShade provides a rich set of built-in shaders that cover most common visual effects you'll need in your Android applications. Here are some of them:
- SaturationKraftShader – Adjust color saturation from black-and-white to oversaturated
- BrightnessKraftShader – Control overall brightness levels
- HueKraftShader – Shift color hues for creative color grading
- ContrastKraftShader – Enhance or reduce image contrast
- MultiplyBlendKraftShader – Multiply blend mode for darkening effects
- ScreenBlendKraftShader – Screen blend for lightening effects
Resources & Next Steps
Get KraftShade
GitHub Repository: Explore the source code, contribute, or report issues
https://github.com/cardinalblue/android-kraft-shade
Documentation: Comprehensive guides, API references, and advanced examples
https://cardinalblue.github.io/android-kraft-shade/docs/intro
What’s Coming Next
In this post, we’ve covered the fundamentals of shader programming and demonstrated how to create your first visual effects using both ShaderToy for experimentation and KraftShade for Android implementation. We explored the basics of fragment shaders, GLSL data types, coordinate systems, and built our first grayscale filter effect.
If you are interested in how KraftShader works internally, please check out our official documentation here: https://cardinalblue.github.io/android-kraft-shade/docs/core-components/shader-system/kraft-shader
Next up, we’ll explore how KraftShade enables dynamic and animated effects through its powerful input system — learning how to create time-based animations, respond to user interactions, and build complex visual effects that change and evolve in real-time within your Android applications.