The learning, at least in models like Dall-E and Stable Diffusion, is "real". The model has no way of "saving" the images in some database to recall parts of them later. It learns in much the same way a human brain does: by absorbing the information it is exposed to and learning to reproduce techniques and styles from a description.
If I asked you to produce a piece of art in the "glitch art" style, and you had never been exposed to that style before, you would need to see references of what that kind of art looks like in order to reproduce it. Either consciously or subconsciously, you would be drawing on a particular artist's interpretation of the colors and composition that make up that style. Without other examples of "glitch art", you might be inclined to reproduce what you know. AI has this same problem.
The AI has no way of saving or ingesting whole pieces of art. Instead, it learns patterns, strokes, and compositions from all the art it views. If I tell the AI to produce glitch art, it must combine its knowledge of what a "glitch" is, what "art" is, the few examples of "glitch art" it has seen, and what it means for something to be "glitchy" or have glitch-like properties.
It happens to associate these concepts with other things it knows, like bugs, computer hacking, cyberpunk color schemes, and the like. When other people's art "appears" within the work generated by an AI, it is due to a problem called overfitting.
As an example, imagine I had put you into a single room from birth. There are no lights, you are fed intravenously, and no other objects of any form exist.
The only illuminated object in the room is a picture of the Mona Lisa. I tell you this is what art is, and to study it closely. Then I remove the painting from the room and tell you to replicate "art" for me on canvas with the brushes and paint beside you. You will most likely paint blank white walls, a light, yourself, or the Mona Lisa. The variation will depend on your skill and your ability to memorize what you see.
Even though you can no longer see the painting, you have seen it before, and your entire concept of what "art" means comes from that one example. You have been overfitted to the Mona Lisa during your training.
To combat overfitting, that is, to prevent you from duplicating any one image, you need many, many examples of art to composite new art from.
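To make that concrete, here is a minimal sketch of the same failure mode. This is a toy model, nothing like Dall-E or Stable Diffusion's actual architecture, and the 8x8 "image" and network sizes are made up for illustration. A tiny generator is trained on exactly one image, and like the person in the room, it ends up memorizing it:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# The room's entire "art education": one lone training image (a flattened 8x8).
mona_lisa = torch.rand(64)

# A tiny generator that maps random noise to an image.
gen = nn.Sequential(nn.Linear(16, 64), nn.Tanh(), nn.Linear(64, 64))
opt = torch.optim.Adam(gen.parameters(), lr=1e-2)

for step in range(2000):
    noise = torch.randn(16)                        # a fresh random seed each step
    loss = ((gen(noise) - mona_lisa) ** 2).mean()  # always compared to the one image
    opt.zero_grad()
    loss.backward()
    opt.step()

# With only one example, the cheapest way to drive the loss down is to
# ignore the input entirely and always emit the memorized image.
a, b = gen(torch.randn(16)), gen(torch.randn(16))
print((a - mona_lisa).abs().mean().item())  # small: it reproduces the training image
print((a - b).abs().mean().item())          # small: the noise barely matters anymore
```

Swap in thousands of different training images and the same setup is forced to actually use the noise, because no single memorized output can fit them all.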
Particularly creative people can imagine what lies beyond the blank room with the Mona Lisa. An AI's "creativity" is simulated by feeding it a random image of noise as a starting point. Even trained on those same few things, the AI will occasionally draw a poorly illuminated person in front of a white wall, or a white wall in place of a person with the Mona Lisa's background, and so on.
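A rough sketch of that noise-as-creativity idea, again a toy and not the real diffusion sampler: the "learned pattern" below is a stand-in for whatever structure training has baked in, and generation just repeatedly nudges a random starting image toward it. Different starting noise, different final result:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the structure training has baked in: a fixed 8x8 gradient.
learned_pattern = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))

def generate(steps=10, strength=0.1):
    img = rng.standard_normal((8, 8))              # start from pure random noise
    for _ in range(steps):
        img += strength * (learned_pattern - img)  # nudge toward learned structure
    return img

a, b = generate(), generate()
print(np.abs(a - b).mean())  # nonzero: different noise seeds, different "art"
```

Every output carries traces of the learned pattern, but the leftover noise is what makes each one distinct.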
Individually, one can argue that those are original compositions of art, even though the Mona Lisa, or parts of her, appears in all of them. Likewise, parts of other people's art are showing up in works generated by Dall-E and Stable Diffusion. People regard this as "art theft" only because it is a machine doing it. A person who takes another piece of artwork from the internet and transforms it into, say, a grid of Rubik's Cubes isn't doing anything more transformative than the AI is.