For reference the effect I'm going for is this:
I'm working in Processing 3, NOT p5.js.
I've looked around the Processing forums, but I can't find anything that works in the current version or that doesn't use PGraphics and a mask, which from what I've read can be expensive to use.
My current ideas and implementations have amounted to drawing shapes around the player and filling in the gaps with a circle that has no fill and a large stroke weight.
Does anyone know of any methods to easily and inexpensively draw a black background over everything except a small circular area?
If this is the wrong place to ask this question just send me on my way I guess, but please be nice. Thank you:)
You could create an image (or PGraphics) that consists of mostly black, with a transparent circle in it. This is called image masking or alpha compositing. Doing a Google image search for "alpha composite" returns a bunch of the images I'm talking about.
Anyway, after you have the image, it's just a matter of drawing it on top of your scene wherever the player is. You might also use the PImage#mask() function. More info can be found in the reference.
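For example, here is a rough Processing 3 sketch of that idea (the hole radius is a placeholder and the mouse stands in for the player position): build an oversized ARGB image once in setup(), make the pixels inside a circle fully transparent and everything else opaque black, then draw it centered on the player each frame.

// Rough sketch of the overlay approach; sizes and the player position are placeholders.
PImage overlay;
int holeRadius = 80;

void setup() {
  size(640, 480);
  // Twice the screen size so it still covers everything when centered near an edge.
  overlay = createImage(width * 2, height * 2, ARGB);
  overlay.loadPixels();
  int cx = overlay.width / 2;
  int cy = overlay.height / 2;
  for (int y = 0; y < overlay.height; y++) {
    for (int x = 0; x < overlay.width; x++) {
      // Transparent inside the circle, opaque black everywhere else.
      boolean inside = dist(x, y, cx, cy) < holeRadius;
      overlay.pixels[y * overlay.width + x] = inside ? color(0, 0) : color(0, 255);
    }
  }
  overlay.updatePixels();
}

void draw() {
  background(100);
  // ... draw your scene and player here ...
  float playerX = mouseX, playerY = mouseY;   // stand-in for your player's position
  image(overlay, playerX - overlay.width / 2, playerY - overlay.height / 2);
}

Because the overlay is built once in setup(), drawing it each frame is just a single image() call, which is cheap.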
I am currently working on a solution for automatic image editing.
So far I have used Canny edge detection and closing.
But what I ultimately want to accomplish is to find all the rectangles in the blueprint and fill them with an image that I have.
I want to know the process that I need to take, not the exact code or solution!
Please suggest what steps I should take to accomplish this, thanks.
Image that has the rectangles
Image that needs to go into all the rectangles
What I have done so far
2018-02-12 EDITED (clean rectangles detected)
I have managed to find the rectangles and draw lines over them, but the result is less reliable than I expected (it draws lines on rectangles that are not parking spaces), and I do not know how to put an image on those rectangles instead of drawing lines on them. Please help me out!
P.S.: Only in Java, please!
In your case, you don't need Canny. Your image is really clean and the edges are already clearly visible. A simple threshold will work better.
For rectangle detection, take a look at this example (included with OpenCV) that uses findContours and checks the angles: https://github.com/opencv/opencv/blob/master/samples/cpp/squares.cpp
In your case, you can skip the Canny step because you don't have gradients. You may need to modify the filtering (for example, the side-length checks) for your case.
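For reference, here is a hedged Java sketch of that route (threshold instead of Canny, then findContours and approxPolyDP, roughly what squares.cpp does); the minimum area and the approximation epsilon are just guesses you would tune for your blueprint.

// Assumes the OpenCV native library has already been loaded (System.loadLibrary).
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

public class FindRectangles {
    public static List<Rect> findRectangles(Mat blueprint) {
        Mat gray = new Mat(), bin = new Mat();
        Imgproc.cvtColor(blueprint, gray, Imgproc.COLOR_BGR2GRAY);
        // Otsu picks the threshold automatically; INV makes the dark lines white.
        Imgproc.threshold(gray, bin, 0, 255, Imgproc.THRESH_BINARY_INV + Imgproc.THRESH_OTSU);

        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(bin, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

        List<Rect> rects = new ArrayList<>();
        for (MatOfPoint contour : contours) {
            MatOfPoint2f curve = new MatOfPoint2f(contour.toArray());
            MatOfPoint2f approx = new MatOfPoint2f();
            Imgproc.approxPolyDP(curve, approx, 0.02 * Imgproc.arcLength(curve, true), true);
            // Keep convex quadrilaterals above a minimum area (tune 500 for your image).
            if (approx.total() == 4
                    && Imgproc.contourArea(approx) > 500
                    && Imgproc.isContourConvex(new MatOfPoint(approx.toArray()))) {
                rects.add(Imgproc.boundingRect(new MatOfPoint(approx.toArray())));
            }
        }
        return rects;
    }
}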
Once the rectangles look good, you just need to copy your image into each location. If the rectangles are rotated, you will have to rotate your image as well.
EDIT:
To copy the small image onto the big image you can do the following:
// submat() returns a view into bigImage, so copyTo() writes straight into it
Mat submatImg = bigImage.submat(new Rect(x, y, smallImage.width(), smallImage.height()));
smallImage.copyTo(submatImg);
If you need to do resizing and rotations, take a look at geometric transformations
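As a rough illustration (rect and angleDegrees are hypothetical placeholders for one of your detected rectangles), resizing and rotating the patch before copying could look like this:

// Resize the patch to the detected rectangle, then rotate it around its center.
Mat resized = new Mat();
Imgproc.resize(smallImage, resized, new Size(rect.width, rect.height));

Mat rotation = Imgproc.getRotationMatrix2D(
        new Point(resized.width() / 2.0, resized.height() / 2.0), angleDegrees, 1.0);
Mat rotated = new Mat();
Imgproc.warpAffine(resized, rotated, rotation, resized.size());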
I'm having quite a bit of difficulty wrapping my head around the actual display side of things with libgdx. That is, it just seems fairly jumbled in terms of what needs to be done in order to actually put something up onto the screen. I guess my confusion can sort of be separated into two parts:
What exactly needs to be done in terms of creating an image? There's Texture, TextureRegion, TextureAtlas, Sprite, Batch, and probably a few other art-related assets that I'm missing. How do these all relate and tie into each other? What's the "production chain" among these, I guess, would be a way of putting it.

In terms of putting whatever is created from the stuff above onto the monitor or display, how do the different coordinate and sizing measures relate and translate to and from each other? Say there's some image X that I want to put on the screen. It's got its own set of dimensions and coordinates, but then there's also a viewport size (is there a viewport position?) and a camera position (is there a camera size?). On top of all that, there's also the overall display size from Gdx.graphics. A few examples of things I might want to do could be as follows:

X is my "global map" that is bigger than my screen size. I want to be able to scroll/pan across it. What are the coordinates/positions I should use when displaying it?

Y is bigger than my screen size. I want to scale it down and have it always be in the center of the screen/display. What scaling factor do I use here, and which coordinates/positions?

Z is smaller than my screen size. I want to stick it in the upper left corner of my screen and have it "stick" to the global map I mentioned earlier. Which positioning system do I use?
Sorry if that was a bunch of stuff... I guess the tl;dr of that second part is just which set of positions/coordinates, sizes, and scales am I supposed to do everything in terms of?
I know this might be a lot to ask at once, and I also know that most of this stuff can be found online, but after sifting through tutorial after tutorial, I can't seem to get a straight answer as to how these things all relate to each other. Any help would be appreciated.
Texture is essentially the raw image data.
TextureRegion allows you to grab smaller areas from a larger texture. For example, it is common practice to pack all of the images for your game/app into a single large texture (the LibGDX “TexturePacker” is a separate program that does this) and then use regions of the larger texture for your individual graphics. This is done because switching textures is a heavy and slow operation and you want to minimize this process.
When you pack your images into a single large image with the TexturePacker it creates a “.atlas” file which stores the names and locations of your individual images. TextureAtlas allows you to load the .atlas file and then extract your original images to use in your program.
Sprite adds position and color capabilities to the texture. Notice that the Texture API has no methods for setting/getting position or color. Sprites will be your characters and other objects that you can actually move around and position on the screen.
Batch/SpriteBatch is an efficient way of drawing multiple sprites to the screen. Instead of making drawing calls for each sprite one at a time the Batch does multiple drawing calls at once.
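To make the "production chain" concrete, here is a minimal sketch of how those pieces fit together; "player.png" and the region coordinates are made up.

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureRegion;

public class PipelineExample extends ApplicationAdapter {
    private Texture texture;
    private TextureRegion region;
    private Sprite sprite;
    private SpriteBatch batch;

    @Override
    public void create() {
        texture = new Texture("player.png");                 // raw image data uploaded to the GPU
        region = new TextureRegion(texture, 0, 0, 32, 32);   // a 32x32 piece of that texture
        sprite = new Sprite(region);                          // region plus position/color/rotation
        sprite.setPosition(100, 50);
        batch = new SpriteBatch();                            // batches the actual draw calls
    }

    @Override
    public void render() {
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.begin();
        sprite.draw(batch);
        batch.end();
    }

    @Override
    public void dispose() {
        batch.dispose();
        texture.dispose();
    }
}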
And hopefully I’m not adding to the confusion, but another option I really like is using the “Actor” and “Stage” classes over the “Sprite” and “SpriteBatch” classes. Actor is similar to Sprite but adds additional functionality for moving/animating via the act method. The Stage replaces the SpriteBatch, as it uses its own internal SpriteBatch, so you do not need to use a SpriteBatch explicitly.
There is also an entire set of UI components (table, button, textfield, slider, progress bar, etc) which are all based off of Actor and work with the Stage.
I can’t really help with question 2. I stick to UI-based apps, so I don’t know the best practices for working with large game worlds. But hopefully someone more knowledgeable in that area can help you with that.
This was too long to reply as a comment, so I’m responding as another answer...
I think both Sprite/SpriteBatch and Actor/Stage are equally powerful as you can still animate and move with Sprite/SpriteBatch, but Actor/Stage is easier to work with. The stage has two methods called “act” and “draw” which allows the stage to update and draw every actor it contains very easily. You override the act method for each of your actors to specify what kind of action you want it to do. Look up a few different tutorials for Stage/Actor with sample code and it should become clear how to use it.
Also, I was slightly incorrect before that “Actor” is equivalent to Sprite, because Sprite includes a texture, but Actor by itself does not have any kind of graphical component. There is an extension of Actor called “Image” that includes a Drawable, so the Image class is actually the equivalent to Sprite. Actor is the base class that provides the methods for acting (or “updating”), but it doesn’t have to be graphical. I've used Actors for other purposes such as triggering audio sounds at specific times.
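A small Stage/Actor sketch of what that looks like in practice (the file name and the action are made up, not from any particular tutorial):

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.scenes.scene2d.Stage;
import com.badlogic.gdx.scenes.scene2d.actions.Actions;
import com.badlogic.gdx.scenes.scene2d.ui.Image;
import com.badlogic.gdx.utils.viewport.ScreenViewport;

public class StageExample extends ApplicationAdapter {
    private Stage stage;

    @Override
    public void create() {
        stage = new Stage(new ScreenViewport());
        Image player = new Image(new Texture("player.png")); // Image = Actor with a graphic
        player.addAction(Actions.moveBy(200, 0, 2f));         // slide right over 2 seconds
        stage.addActor(player);
    }

    @Override
    public void render() {
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        stage.act(Gdx.graphics.getDeltaTime());  // updates every actor (runs their actions)
        stage.draw();                             // draws them with the stage's internal SpriteBatch
    }

    @Override
    public void dispose() {
        stage.dispose();
    }
}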
Atlas creates the large Texture containing all of your PNG files and then allows you to get regions from it for the individual PNGs. So the pipeline for getting a specific PNG graphic would be Atlas > Region > Sprite/Image. Both the Image and Sprite classes have constructors that take a region.
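As a quick sketch of that pipeline (types as in the sketches above; "game.atlas" and "player" stand in for whatever names your TexturePacker output uses):

TextureAtlas atlas = new TextureAtlas(Gdx.files.internal("game.atlas"));
TextureRegion region = atlas.findRegion("player");   // looked up by the original image's name

Sprite sprite = new Sprite(region);   // the Sprite route
Image image = new Image(region);      // the scene2d Image route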
I was just wondering how you would be able to check each pixel, from the top of your screen to the bottom (or just a 500x500 rectangle), for a pattern of pixels in it. For example, look through all the pixels and see if there are 20 red pixels in a row (search the screen for a red box).
Sorry for the terrible description; let me know and I'll try to make it more specific.
What you are trying to do is called Template Matching in the Image Processing community, you should first check this.
For the implementation part, you can access the pixels of a BufferedImage (using the getRGB methods), or even better you can use JAI or JavaCV if you want to do anything serious.
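For example, a rough sketch of the BufferedImage route, here using java.awt.Robot to grab a 500x500 screen area; the "red" tolerance is an assumption, since screen captures rarely contain exact color values:

import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.image.BufferedImage;

public class RedRunFinder {
    // Returns true if any row of the image contains runLength red-ish pixels in a row.
    public static boolean hasRedRun(BufferedImage img, int runLength) {
        for (int y = 0; y < img.getHeight(); y++) {
            int run = 0;
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                boolean reddish = r > 200 && g < 60 && b < 60;   // crude tolerance, tune as needed
                run = reddish ? run + 1 : 0;
                if (run >= runLength) return true;
            }
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        BufferedImage screen = new Robot().createScreenCapture(new Rectangle(0, 0, 500, 500));
        System.out.println(hasRedRun(screen, 20));
    }
}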
What is the best way to edit individual pixels of a texture multiple times per frame? I have tried a couple of ways to no avail. What is the most optimal way to do this? I have tried using immediate mode and drawing each quad, though this is really slow.
Edit:
I forgot to mention that I am doing an unusual "fog of war" system. This system doesn't let you see around walls, but instead acts like a 2D ray-traced shadow. I want these shadows to be pixelated, as this is part of the style of the game. I am trying to find the best way to create a form of shadow map that I can overlay on the world to show what you can see.
I recommend using a Pixel Buffer Object.
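As a rough illustration of that approach, assuming LWJGL 3 and an RGBA texture of size texW x texH that is already bound; the buffer size and usage hint are guesses to tune:

import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL15;
import org.lwjgl.opengl.GL21;
import java.nio.ByteBuffer;

public class PboUploader {
    private final int pbo;
    private final int texW, texH;

    public PboUploader(int texW, int texH) {
        this.texW = texW;
        this.texH = texH;
        pbo = GL15.glGenBuffers();
        GL15.glBindBuffer(GL21.GL_PIXEL_UNPACK_BUFFER, pbo);
        // Allocate once; GL_STREAM_DRAW because we rewrite the contents every frame.
        GL15.glBufferData(GL21.GL_PIXEL_UNPACK_BUFFER, (long) texW * texH * 4, GL15.GL_STREAM_DRAW);
        GL15.glBindBuffer(GL21.GL_PIXEL_UNPACK_BUFFER, 0);
    }

    // Write this frame's shadow-map pixels into the mapped buffer, then hand them to the texture.
    public void upload(byte[] rgbaPixels) {
        GL15.glBindBuffer(GL21.GL_PIXEL_UNPACK_BUFFER, pbo);
        ByteBuffer mapped = GL15.glMapBuffer(GL21.GL_PIXEL_UNPACK_BUFFER, GL15.GL_WRITE_ONLY);
        mapped.put(rgbaPixels);
        GL15.glUnmapBuffer(GL21.GL_PIXEL_UNPACK_BUFFER);
        // With a PBO bound, the last argument is an offset into the buffer, not a pointer.
        GL11.glTexSubImage2D(GL11.GL_TEXTURE_2D, 0, 0, 0, texW, texH,
                GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, 0L);
        GL15.glBindBuffer(GL21.GL_PIXEL_UNPACK_BUFFER, 0);
    }
}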
I'm developing a side-scrolling 2D game using AndEngine.
I'm using their SVG extension (I'm using vector graphics).
But I discovered a strange and ugly effect while moving my player (more precisely, while the camera is chasing the player, i.e., while the camera is changing its position).
The images of my sprites just look different; they appear blurred, or there is an effect as if those images were moving (not changing their position, just a jittery effect, really hard to explain and name properly). Hopefully this image may explain it a bit:
It's more or less how it looks in the game, where:
a) the "FIRST" image shows the square while the player is moving (the camera isn't); the image looks as it should
b) the "SECOND" image is the same one, but with the strange effect (which looks like the image moving/blurring while the camera is chasing the player)
A friend of mine told me that it might be a hardware problem:
"The blurring that you notice is actually a hardware problem. Some phones "smooth" the content on the screen to give a nicer feel to applications. I don't know if it's the screen or the graphics processor, but it doesn't occur on my wife's Samsung Captivate. It happens on my Atrix and Xoom though. It's really noticeable on the Xoom due to the large screen size."
But it seems there is a way to prevent it, since I have tested many similar games where the camera chases the player and I could not notice such an effect.
Is there a way to turn this off in code?
I'm grateful for the previous answers; unfortunately, the problem still exists.
So far, I have tried:
casting the values passed to setCenter (which is executed in updateChaseEntity) to int
testing the game using PNG images instead of the SVG extension and vector graphics
different TextureOptions
hardwareAcceleration
If someone has a different idea of what may be causing this strange effect, I would be really grateful for help. Thank you.
Some devices (e.g., the Xperia Play) bleed everywhere when trying to draw things that are moving quickly. For example, a red icon in the application list leaves a blur behind it. You could try hardwareAcceleration in the manifest (on and off) to see if it makes a difference.
You'd probably get the same effect if you weren't using SVG.
When your player is just moving to the right and the camera begins chasing him, all the other sprites except the player move to the left. Try printing the absolute coordinates of your "blurring" sprite (or some of its anchor points) to the log. The X coordinate of the sprite should decrease linearly. If you notice it increasing at times, that could be the cause of the blur.
Hope this will help.
It sounds like it's due to the camera moving in real (non-integer) increments, leaving the SVG components resting on non-integer bounds, so the SVG renderer's anti-aliasing kicks in and softens the edges. Try moving the camera in integer increments by casting the camera values to int.
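A hedged sketch of what that could look like with an AndEngine camera, assuming setCenter is overridable in your version; CAMERA_WIDTH, CAMERA_HEIGHT and playerSprite are placeholders from your own setup:

// Snap the chase camera to whole pixels by rounding inside setCenter.
Camera camera = new Camera(0, 0, CAMERA_WIDTH, CAMERA_HEIGHT) {
    @Override
    public void setCenter(final float pCenterX, final float pCenterY) {
        super.setCenter((int) pCenterX, (int) pCenterY);
    }
};
camera.setChaseEntity(playerSprite);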
I'm not familiar with this engine, but I wonder why you would use vector graphics for a pixelated art style. I'd be surprised if the character in your screenshot is really vector art... maybe it's a texture imported as SVG? I tried to use Flash a few times back in the day and made the same mistake... I'm not saying it's not possible, but Flash and other vector software are not intended for creating pixel art. There is a reason most Flash games have a similar look.
The best way to debug it is to try a different-looking sprite.
Maybe it is just the slow response time of your device's display.
I'm also an AndEngine developer and have never seen such behavior.
Sometimes jittering can be fixed by using FixedStepEngine; it might help.
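For example (assuming the GLES2 branch of AndEngine, where BaseGameActivity lets you override onCreateEngine):

@Override
public Engine onCreateEngine(final EngineOptions pEngineOptions) {
    // 60 update steps per second regardless of the frame rate; the number is illustrative.
    return new FixedStepEngine(pEngineOptions, 60);
}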
If you can post your code maybe we can better help you.