I came across inkscape.org, which offers a free and open-source vector graphics editor used for creating illustrations, icons, logos, diagrams, maps, and web graphics, and it gave me a curious idea.
Something I realized is that when zooming in, nothing ever gets pixelated, unlike with JPEGs, bitmaps, or any other pixel-based image format. This made me wonder if it would be possible to carry the same idea into an actual 3D world with vector-based textures. No matter how close you get to an object, the texture would never look blurry. Instead of having twenty-something mipmaps and an 8192x8192 texture or larger, you would just have one vector-based texture that scales with distance automatically. On a texture with lots of small vector-based details, you could assign them to different groups that would be rendered based on distance. You could still tile them and give them special properties, but I am not sure how much space these textures would take up.
I was just thinking about this because textures are getting very large and starting to consume a fair amount of space. Thanks for reading.
Those are some interesting thoughts. The lossless quality is why vector graphics are often a good solution where fine detail is required, especially in print media.
To some extent, there are vector graphics in gaming (anything to do with meshes or vertices, for instance; the edge of a wall is where it is, no matter how much you zoom in). However, the reason they are not so common for textures is that, to describe a lifelike surface, it would take a lot of mathematics to decode the vector instructions into a colour for each pixel on screen.
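As a toy example of what that per-pixel decoding involves, here is a sketch in Python that turns a single vector "instruction" (one circle) into a pixel grid; every pixel pays the cost of the maths, and a realistic texture would have hundreds of shapes per pixel (all names here are mine, purely for illustration):

```python
import math

def rasterize_circle(width, height, cx, cy, r):
    """Decode one vector 'instruction' (a filled circle) into pixels.

    Each pixel must evaluate the shape's maths to decide its colour;
    that cost repeats for every shape in the vector description.
    """
    pixels = []
    for y in range(height):
        row = []
        for x in range(width):
            # Inside/outside test at the pixel centre.
            inside = math.hypot(x + 0.5 - cx, y + 0.5 - cy) <= r
            row.append(1 if inside else 0)
        pixels.append(row)
    return pixels

grid = rasterize_circle(8, 8, 4.0, 4.0, 3.0)
```

Even this trivial shape needs a square root per pixel; curves, gradients, and overlapping layers multiply that quickly.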
A ‘vector graphics’ game would effectively be a game that used super-high-detail meshes (or even non-polygon surfaces, like NURBS, etc.). I think this would work well with physically based rendering solutions, where your textures don’t describe so much how to draw something as specifically what it is; especially if the materials are purely algorithmic and do not rely on bitmap data.
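To illustrate the "purely algorithmic" idea, here is a minimal Python sketch of a material evaluated from UV coordinates alone, so it stays sharp at any zoom level because no bitmap is ever stored (the checker pattern and all names are my own invention, not from any engine):

```python
def checker_material(u, v, scale=8.0):
    """Purely procedural material: colour is computed from (u, v)
    directly, so there is no stored resolution to run out of."""
    cell = (int(u * scale) + int(v * scale)) % 2
    # Light or dark albedo depending on which checker cell we're in.
    return (0.9, 0.9, 0.9) if cell == 0 else (0.1, 0.1, 0.1)
```

You can sample it at any precision you like, e.g. `checker_material(0.01, 0.01)`, and the edges between cells are exact at every distance.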
Certainly, using a vector source to generate your LODs as you describe could theoretically work; especially if once you got to the super-high levels of detail you made sure only to render to bitmap the parts that are visible. This would probably be quite complicated to implement.
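As a rough sketch of that LOD idea: instead of downsampling a bitmap, you re-rasterise the vector source at each mip size, so every level is crisp. This assumes you have some `rasterise(size)` function that draws the vector art at a given resolution; everything here is hypothetical:

```python
import math

def build_lods(rasterise, base_size, levels):
    """Re-render the vector source once per LOD level rather than
    filtering down a single large bitmap."""
    return [rasterise(max(1, base_size >> i)) for i in range(levels)]

# Stand-in rasteriser: a filled circle at any requested resolution.
def circle_at(size):
    c, r = size / 2.0, size / 3.0
    return [[1 if math.hypot(x + 0.5 - c, y + 0.5 - c) <= r else 0
             for x in range(size)] for y in range(size)]

lods = build_lods(circle_at, 64, 4)  # 64, 32, 16, 8 pixel versions
```

The visibility-driven part (only rasterising the visible region at the highest levels) is where the real complexity would live, and it is not shown here.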
Vector graphics are for print media, things like graphic design. They take more power to render, and for something like an actual texture (rather than a simple vector shape) it would require far too much processing to be workable.
I’ve had a little success in the past using distance-field textures for some things, which is pretty close to vector-based stuff (you don’t need any special tech for the basics, as long as you can afford some uncompressed or grayscale textures for the distance fields). It was based on Valve’s paper. Most of the time I was trying to reduce texture memory, and found that for most things it ended up using the same amount or more due to the uncompressed requirement, but it was still worth it in one-off cases where you got a better result. The engine already uses it quite well for dominant shadow maps.
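For context, the core trick in Valve's paper boils down to sampling a stored distance value with bilinear filtering and then thresholding it, which keeps edges sharp even under heavy magnification. A minimal Python sketch of that sampling step (names are mine, not from the paper, and a real implementation would do this in a pixel shader):

```python
def sample_distance_field(field, u, v, threshold=0.5):
    """Bilinearly sample a stored distance field at (u, v) in [0, 1],
    then alpha-test against a threshold: inside -> 1.0, outside -> 0.0.

    The filtered distance stays smooth across texels, so the thresholded
    edge stays crisp at any magnification."""
    h, w = len(field), len(field[0])
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = field[y0][x0] * (1 - fx) + field[y0][x1] * fx
    bot = field[y1][x0] * (1 - fx) + field[y1][x1] * fx
    return 1.0 if top * (1 - fy) + bot * fy >= threshold else 0.0

# Tiny example field: an edge running down the middle.
field = [[0.0, 0.25, 0.75, 1.0],
         [0.0, 0.25, 0.75, 1.0]]
```

The uncompressed-texture requirement mentioned above exists because block compression mangles the smooth distance gradient that the threshold depends on.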
Think of an oval, for example. It's not a simple object for vectors if you look at it too hard, but say you take four points opposite each other: top, bottom, left, and right. Give each point a variable, and use math to describe their relation to one another. Colour-wise, just fill with a gradient. You would just need to be able to decode that information into a picture.
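A sketch of that decoding, assuming the four extreme points really do pin down an axis-aligned ellipse with a radial gradient fill (all names here are hypothetical, just to show the "few numbers in, picture out" idea):

```python
def ellipse_from_extremes(top, bottom, left, right):
    """Recover centre and radii from four (x, y) extreme points,
    then return a shading function for the gradient fill."""
    cx = (left[0] + right[0]) / 2.0
    cy = (top[1] + bottom[1]) / 2.0
    rx = (right[0] - left[0]) / 2.0
    ry = (bottom[1] - top[1]) / 2.0

    def shade(x, y):
        # Normalised squared distance: 0 at the centre, 1 at the edge.
        d = ((x - cx) / rx) ** 2 + ((y - cy) / ry) ** 2
        # Gradient intensity inside; None means the pixel is outside.
        return max(0.0, 1.0 - d) if d <= 1.0 else None

    return shade

shade = ellipse_from_extremes((4, 0), (4, 8), (0, 4), (8, 4))
```

Eight numbers describe the whole shape at any resolution; the decoder just evaluates `shade(x, y)` for each pixel it wants.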
I was just wondering if there was a direct way to get around storing any information in a standard picture format. I can see where super-complex images wouldn’t benefit, but many textures don’t describe too much by themselves. It is usually many pieces put together that form something more complex.
It would only work for simple shapes, like Flash animations.
Not 100% true. Guilty Gear Xrd uses vector images so they can get high-fidelity close-ups of their characters (see How Guilty Gear Xrd's gorgeous 3D cel-shaded look was created - Polygon). It probably wouldn’t scale well, but it works great for a fighting game.
A cel-shaded style is not complex and would work well with vector images, since it would use less memory than a raster image. The original post was suggesting something complex, like taking a typical texture image and converting it to vector (which is technically possible), but that would give a result that takes more memory and runs poorly.
Xrd doesn’t use vector textures, they are normal raster textures. The inner contour lines are perfectly aligned either horizontally or vertically so there’s no aliasing, which means you get a perfect result. Without aliasing (and thus without pixelation) even a raster image can look like a vector image.
Not for traditional gaming, but it can prove useful for things like diagrams, or where you only need one such texture present (e.g. a bird's-eye view) and you don't do much zooming in or out. If you switch camera views less frequently, it would save you re-generating the output, and you could do it all in real time in the engine without having to eyeball the position in an external program.
Yeah, I’m not talking about actual photorealistic texturing; it’s mostly black and white with some colour, and no gradients.
Example: Parker Solar Probe Position: http://parkersolarprobe.jhuapl.edu/w…01812_0041.svg
For diagram lines you would do it differently, it wouldn’t have anything to do with vector images, that would be something related to how viewport gizmos are rendered.