Color Palette Generator

Extract dominant colors from any image. Export as CSS variables, Tailwind config, SCSS, JSON, or Figma tokens.

100% local — your image never leaves your device.

Drop an image or click to upload

JPG, PNG, WEBP — any photo or artwork

Runs entirely in your browser. No uploads. Your files stay private.

How The Palette Generator Picks Colours From An Image

Color Palette Generator pulls representative colours out of an image using a median-cut algorithm — the same technique GIMP and Photoshop use to build their indexed-colour exports. The image is decoded into a Canvas, getImageData returns the raw RGBA pixel buffer, and the buffer is recursively split along its longest colour axis until the requested number of buckets remain. Each bucket is averaged to produce one swatch.
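The recursive split-and-average loop described above can be sketched in a few lines. This is illustrative code, not the tool's actual source; the `medianCut` name and its input shape (a list of `[r, g, b]` triplets, e.g. pulled out of a `getImageData()` buffer) are assumptions:

```javascript
// Minimal median-cut sketch: repeatedly split the bucket with the widest
// channel range along that channel, then average each bucket into a swatch.
function medianCut(pixels, targetBuckets) {
  let buckets = [pixels];
  while (buckets.length < targetBuckets) {
    // Find the bucket and channel with the largest value range.
    let widest = 0, widestRange = -1, widestChannel = 0;
    buckets.forEach((bucket, i) => {
      for (let c = 0; c < 3; c++) {
        const values = bucket.map((p) => p[c]);
        const range = Math.max(...values) - Math.min(...values);
        if (range > widestRange) { widestRange = range; widest = i; widestChannel = c; }
      }
    });
    if (widestRange <= 0) break; // every remaining bucket is a single colour
    // Sort that bucket along the widest channel and cut it at the median.
    const bucket = buckets[widest];
    bucket.sort((a, b) => a[widestChannel] - b[widestChannel]);
    const mid = bucket.length >> 1;
    buckets.splice(widest, 1, bucket.slice(0, mid), bucket.slice(mid));
  }
  // Average each bucket to produce one swatch.
  return buckets.map((bucket) => {
    const sum = bucket.reduce(
      (s, p) => [s[0] + p[0], s[1] + p[1], s[2] + p[2]], [0, 0, 0]);
    return sum.map((v) => Math.round(v / bucket.length));
  });
}
```

The `Math.max(...values)` spread is fine for the downsampled buffers described below, though a loop would be needed for multi-megapixel arrays.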
The pipeline is deliberately simple: there is no Web Worker and no WebAssembly. For typical photos under 12 megapixels, generating a 5-10 colour palette runs in well under a second on a phone. Very large images (for instance, a 50 MP camera RAW exported as JPEG) are downsampled before quantisation so the algorithm stays interactive — sampling every Nth pixel is usually indistinguishable from sampling them all when you only need 8 buckets.
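The every-Nth-pixel downsampling amounts to choosing a stride from a pixel budget. A sketch, where the `maxSamples` budget and both function names are assumptions rather than the tool's real parameters:

```javascript
// Pick a stride so at most ~maxSamples pixels reach the quantiser,
// regardless of how large the source image is.
function sampleStride(width, height, maxSamples = 100000) {
  return Math.max(1, Math.ceil((width * height) / maxSamples));
}

// Walk a getImageData()-style RGBA buffer with that stride.
function samplePixels(rgba, width, height, maxSamples = 100000) {
  const stride = sampleStride(width, height, maxSamples);
  const pixels = [];
  for (let i = 0; i < width * height; i += stride) {
    const o = i * 4; // 4 bytes per pixel: R, G, B, A
    pixels.push([rgba[o], rgba[o + 1], rgba[o + 2]]);
  }
  return pixels;
}
```

A 48 MP image with a 100k budget yields a stride of 480, i.e. roughly one pixel in every 480 is sampled.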
Each extracted swatch is converted into HEX, RGB, and HSL forms in the browser. HSL is computed from the RGB triplet using the standard CSS Color Module 4 formula, which makes it easy to spot near-greys (low saturation) or to manually tweak hue and lightness in your design tool of choice without losing the source colour.
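The RGB-to-HSL step follows the standard conversion formula; a sketch in browser-style JavaScript (the function name and rounding are assumptions):

```javascript
// RGB (0–255 per channel) → HSL, hue in degrees, s/l as percentages.
function rgbToHsl(r, g, b) {
  r /= 255; g /= 255; b /= 255;
  const max = Math.max(r, g, b), min = Math.min(r, g, b);
  const l = (max + min) / 2;
  const d = max - min;
  let h = 0, s = 0;
  if (d !== 0) {
    s = d / (1 - Math.abs(2 * l - 1));
    switch (max) {
      case r: h = ((g - b) / d) % 6; break;  // between yellow and magenta
      case g: h = (b - r) / d + 2; break;    // between cyan and yellow
      default: h = (r - g) / d + 4;          // between magenta and cyan
    }
    h *= 60;
    if (h < 0) h += 360;
  }
  return { h: Math.round(h), s: Math.round(s * 100), l: Math.round(l * 100) };
}
```

Near-greys fall out directly: any swatch where `d` is small gets a low `s`, whatever its hue.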
Export covers the formats designers and developers actually use day-to-day: copy each swatch to the clipboard, or export the whole palette as CSS custom properties, SCSS variables, a Tailwind config snippet, Figma/W3C design tokens, JSON, or a self-contained SVG swatch sheet. The SVG export is handy for sharing a palette with a client because it renders the same way in every viewer.
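As one example of those exports, CSS custom properties are little more than string assembly. A minimal sketch; the `--palette-N` naming is an assumption and the tool's actual variable names may differ:

```javascript
// Turn a list of hex swatches into a :root block of CSS custom properties.
function toCssVariables(hexSwatches, prefix = "palette") {
  const lines = hexSwatches.map((hex, i) => `  --${prefix}-${i + 1}: ${hex};`);
  return `:root {\n${lines.join("\n")}\n}`;
}
```

The other text formats (SCSS variables, JSON, a Tailwind snippet) are the same idea with different templates around the swatch list.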
An EyeDropper API mode is also available on browsers that support it (Chrome, Edge, and Opera 95 and newer). When activated, the system colour picker takes over and you can sample any pixel on screen — even outside the browser window — and add it to the palette. Firefox and Safari fall back to in-image sampling.
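Feature detection for that mode might look like the sketch below. The fallback shape is an assumption, but `EyeDropper`, its `open()` method, and the `sRGBHex` result field are the standard API surface:

```javascript
// Returns a hex string from the system colour picker, or null when the
// EyeDropper API is unavailable (Firefox, Safari, non-browser runtimes)
// or when the user cancels with Esc.
async function pickScreenColor() {
  if (typeof globalThis.EyeDropper !== "function") return null; // fallback path
  const dropper = new globalThis.EyeDropper();
  try {
    const result = await dropper.open(); // system picker takes over the cursor
    return result.sRGBHex;               // e.g. "#3a7bd5"
  } catch {
    return null; // user dismissed the picker
  }
}
```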
All of this happens client-side. The image is loaded into an HTMLImageElement via a blob URL, drawn into an off-screen Canvas, sampled, and discarded when you reset. There is no upload, no analytics on the image content, and no server-side cache.
Median-cut is fast and stable but not perceptually weighted, and its output depends on which pixels are sampled: the same photo can yield slightly different palettes at different sizes, because downsampling changes the pixel set the quantiser sees. For colour-accurate brand work, treat the output as a starting point and nudge the swatches in a colour tool that operates in OKLCH or LAB.

Common Use Cases

01

Brand identity from photography

Pull a cohesive 5-7 colour palette from a curated brand photo and lock it in as the source of truth for UI, print, and merch.

02

Website theme bootstrap

Generate CSS custom properties from a hero image and paste them straight into a Tailwind theme or shadcn token file to scaffold a new site.

03

Figma design token export

Export W3C design tokens JSON and import it into Figma's Tokens Studio plugin so your tokens stay in sync with the source image.

04

Slide deck colour matching

Extract colours from a product screenshot and apply them to your Keynote or Google Slides theme so the deck and the product feel like the same family.

Frequently Asked Questions

Which algorithm does the generator use?
A standard median-cut quantiser. Pixels are recursively split along the channel with the largest range until the requested number of buckets remains, then each bucket is averaged. It is the same family of algorithm used to build palettised GIFs and indexed PNGs.
How many colours can I extract?
Between 2 and 16. Going higher than 16 tends to produce near-duplicate swatches that are hard to distinguish, which is why the slider stops there.
Why does the same image sometimes give a slightly different palette?
Median-cut depends on the order in which buckets are split. Tiny differences in how the image is downsampled between sessions can swap two perceptually similar colours. Re-running with a different swatch count usually settles on a stable set.
How does the eyedropper mode work?
When available, the EyeDropper API returns the exact sRGB hex value of the pixel under your cursor, including pixels outside the browser. It is supported in Chromium browsers (Chrome, Edge, Opera, Brave) version 95 and newer; Firefox and Safari fall back to in-image sampling.
Which export formats are supported?
Per-swatch copy in HEX, RGB, or HSL, plus full-palette exports as CSS custom properties, SCSS variables, a Tailwind config snippet, JSON, Figma/W3C design tokens JSON, and a self-contained SVG swatch sheet.
Is my image uploaded anywhere?
No. The image lives as a blob URL in your tab, is drawn into a Canvas to read pixels, and is discarded when you reset. There is no network call that carries pixel data.
Does it handle transparent PNGs?
Yes. Fully transparent pixels are skipped during sampling so the palette reflects the visible image, not the background. Semi-transparent pixels are alpha-composited onto white before being counted.
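The compositing rule in that answer is out = c * a + 255 * (1 - a) per channel, with alpha normalised to 0–1. A sketch (the function name is illustrative):

```javascript
// Composite one RGBA pixel (alpha in 0–255) onto a white background
// before it is counted by the quantiser.
function onWhite(r, g, b, a255) {
  const a = a255 / 255;
  const blend = (c) => Math.round(c * a + 255 * (1 - a));
  return [blend(r), blend(g), blend(b)];
}
```

So half-transparent black counts as mid-grey, and a fully transparent pixel would come out white (though fully transparent pixels are skipped entirely, per the answer above).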
Can I edit a swatch after extraction?
Not directly in this tool. Copy the HEX value into a colour editor like the CSS Gradient tool, the OKLCH picker built into Chrome DevTools, or your design app of choice and tweak from there.
Is there a maximum image size?
There is no hard limit, but the image is decoded into a full-resolution Canvas first. On mobile devices, photos above roughly 8000 px on the long edge can hit the browser's Canvas size cap. Resize first with Image Resizer if that happens.
Is the clustering perceptually uniform?
No. Median-cut works in sRGB, not OKLCH or LAB. Two perceptually similar colours can end up in different buckets if their RGB values are far apart, and vice versa. For perceptually weighted clustering, run the output through a tool that operates in LAB.
