Transaction handling when wrapping Stream into Flux (Java)

I'm having real trouble understanding what goes on behind the scenes when manually wrapping a Stream, received as a query result from Spring Data JPA, into a Flux.
Consider the following:
Entity:
@NoArgsConstructor
@AllArgsConstructor
@Data
@Entity
public class TestEntity {
    @Id
    private Integer a;
    private Integer b;
}
Repository:
public interface TestEntityRepository extends JpaRepository<TestEntity, Integer> {
    Stream<TestEntity> findByBBetween(int b1, int b2);
}
Simple test code:
@Test
@SneakyThrows
@Transactional
public void dbStreamToFluxTest() {
    testEntityRepository.save(new TestEntity(2, 6));
    testEntityRepository.save(new TestEntity(3, 8));
    testEntityRepository.save(new TestEntity(4, 10));
    testEntityFlux(testEntityStream()).subscribe(System.out::println);
    testEntityFlux().subscribe(System.out::println);
    Thread.sleep(200);
}
private Flux<TestEntity> testEntityFlux() {
    return fromStream(this::testEntityStream);
}

private Flux<TestEntity> testEntityFlux(Stream<TestEntity> testEntityStream) {
    return fromStream(() -> testEntityStream);
}

private Stream<TestEntity> testEntityStream() {
    return testEntityRepository.findByBBetween(1, 9);
}

static <T> Flux<T> fromStream(final Supplier<Stream<? extends T>> streamSupplier) {
    return Flux
            .defer(() -> Flux.fromStream(streamSupplier))
            .subscribeOn(Schedulers.elastic());
}
Questions:
Is this the correct way to do what I do, especially regarding the static fromStream method?
While the call to testEntityFlux(testEntityStream()) does what I expect, for reasons I really don't understand, the call to testEntityFlux() runs into an error:
reactor.core.Exceptions$ErrorCallbackNotImplemented: org.springframework.dao.InvalidDataAccessApiUsageException: You're trying to execute a streaming query method without a surrounding transaction that keeps the connection open so that the Stream can actually be consumed. Make sure the code consuming the stream uses @Transactional or any other way of declaring a (read-only) transaction.
Caused by: org.springframework.dao.InvalidDataAccessApiUsageException: You're trying to execute a streaming query method without a surrounding transaction that keeps the connection open so that the Stream can actually be consumed. Make sure the code consuming the stream uses @Transactional or any other way of declaring a (read-only) transaction.
... which is what usually happens when I forget the @Transactional, which I didn't.
EDIT
Note: The code was inspired by: https://github.com/chang-chao/spring-webflux-reactive-jdbc-sample/blob/master/src/main/java/me/changchao/spring/springwebfluxasyncjdbcsample/service/CityServiceImpl.java which in turn was inspired by https://spring.io/blog/2016/07/20/notes-on-reactive-programming-part-iii-a-simple-http-server-application.
However, the Mono version has the same "issue".
EDIT 2
An example using Optional; note that in testEntityMono(), replacing testEntityOptional() with testEntityOptionalManual() leads to working code. Thus it all seems to be directly related to how JPA fetches the data:
@SneakyThrows
@Transactional
public void dbOptionalToMonoTest() {
    testEntityRepository.save(new TestEntity(2, 6));
    testEntityRepository.save(new TestEntity(3, 8));
    testEntityRepository.save(new TestEntity(4, 10));
    testEntityMono(testEntityOptional()).subscribe(System.out::println);
    testEntityMono().subscribe(System.out::println);
    Thread.sleep(1200);
}

private Mono<TestEntity> testEntityMono() {
    return fromSingle(() -> testEntityOptional().get());
}

private Mono<TestEntity> testEntityMono(Optional<TestEntity> testEntity) {
    return fromSingle(() -> testEntity.get());
}

private Optional<TestEntity> testEntityOptional() {
    return testEntityRepository.findById(4);
}

@SneakyThrows
private Optional<TestEntity> testEntityOptionalManual() {
    Thread.sleep(1000);
    return Optional.of(new TestEntity(20, 20));
}

static <T> Mono<T> fromSingle(final Supplier<T> tSupplier) {
    return Mono
            .defer(() -> Mono.fromSupplier(tSupplier))
            .subscribeOn(Schedulers.elastic());
}

TL;DR:
It boils down to the differences between imperative and reactive programming assumptions and Thread affinity.
Details
We first need to understand what happens with transaction management to understand why your arrangement ends with a failure.
Using a @Transactional method creates a transactional scope for all code within the method. Transactional methods returning scalar values, Stream, collection-like types, or void (basically non-reactive types) are considered imperative transactional methods.
In imperative programming, flows stick to their carrier Thread. The code is expected to remain on the same Thread and not to switch threads. Therefore, transaction management associates transactional state and resources with the carrier Thread in a ThreadLocal storage. As soon as code within a transactional method switches threads (e.g. spinning up a new Thread or using a Thread pool), the unit of work that gets executed on a different Thread leaves the transactional scope and potentially runs in its own transaction. In the worst case, the transaction is left open on an external Thread because there is no transaction manager monitoring entry/exit of the transactional unit of work.
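This thread affinity can be sketched with plain JDK classes. The TX_HOLDER field below is a made-up stand-in for the framework's internal ThreadLocal storage, not a Spring API; the point is only that a value bound to the carrier Thread is invisible to a freshly spawned Thread:

```java
import java.util.concurrent.atomic.AtomicReference;

public class ThreadLocalDemo {
    // Illustrative stand-in for the transactional state a framework binds to the current thread
    static final ThreadLocal<String> TX_HOLDER = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        TX_HOLDER.set("tx-1"); // "transaction" bound to the carrier thread

        AtomicReference<String> seenOnWorker = new AtomicReference<>();
        Thread worker = new Thread(() -> seenOnWorker.set(TX_HOLDER.get()));
        worker.start();
        worker.join();

        System.out.println("carrier thread sees: " + TX_HOLDER.get()); // tx-1
        System.out.println("worker thread saw: " + seenOnWorker.get()); // null
    }
}
```

Any unit of work shifted to the worker thread therefore runs outside the carrier thread's transactional scope.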
@Transactional methods returning a reactive type (such as Mono or Flux) are subject to reactive transaction management. Reactive transaction management is different from imperative transaction management in that the transactional state is attached to a Subscription, specifically the subscriber Context. The context is only available with reactive types, not with scalar types, as there are no means to attach data to void or a String.
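By contrast, here is a minimal Reactor sketch of state that travels with the Subscription rather than with a Thread; the "txName" key is invented for illustration and is not what a reactive transaction manager actually stores:

```java
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

public class SubscriberContextDemo {
    public static void main(String[] args) {
        String result = Mono
                .deferContextual(ctx -> Mono.just("state = " + ctx.get("txName")))
                .subscribeOn(Schedulers.boundedElastic()) // the thread switch does not lose the context
                .contextWrite(ctx -> ctx.put("txName", "tx-1"))
                .block();
        System.out.println(result); // state = tx-1
    }
}
```

The context is written at subscription time and flows upstream, which is why it remains available regardless of which Scheduler ends up executing the pipeline.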
Looking at the code:
@Test
@Transactional
public void dbStreamToFluxTest() {
    // …
}
we see that this method is a @Transactional test method. Here we have two things to consider:
The method returns void, so it is subject to imperative transaction management, associating the transactional state with a ThreadLocal.
There's no reactive transaction support for @Test methods, because typically a Publisher is expected to be returned from the method, and by doing so, there would be no way to assert the outcome of the stream.
@Test
@Transactional
public Publisher<Object> thisDoesNotWork() {
    return myRepository.findAll(); // Where did my assertions go?
}
Let's take a closer look at the fromStream(…) method:
static <T> Flux<T> fromStream(final Supplier<Stream<? extends T>> streamSupplier) {
    return Flux
            .defer(() -> Flux.fromStream(streamSupplier))
            .subscribeOn(Schedulers.elastic());
}
The code accepts a Supplier that returns a Stream. Next, subscription signals (subscribe(…), request(…)) are instructed to happen on the elastic Scheduler, which effectively switches the Thread on which the Stream gets created and consumed. Therefore, subscribeOn causes the Stream creation (the call to findByBBetween(…)) to happen on a different Thread than your carrier Thread.
Removing subscribeOn(…) will fix your issue.
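A minimal sketch of that fix, assuming the goal is to open and consume the Stream on whatever Thread calls subscribe(), i.e. the carrier Thread that holds the transaction:

```java
import java.util.function.Supplier;
import java.util.stream.Stream;

import reactor.core.publisher.Flux;

public class FromStreamOnCarrierThread {
    // Without subscribeOn, the Supplier runs on the subscribing thread,
    // so the stream is opened and consumed inside the surrounding transaction.
    static <T> Flux<T> fromStream(final Supplier<Stream<? extends T>> streamSupplier) {
        return Flux.defer(() -> Flux.fromStream(streamSupplier));
    }

    public static void main(String[] args) {
        fromStream(() -> Stream.of(1, 2, 3)).subscribe(System.out::println);
    }
}
```

Of course this also removes the asynchrony that the elastic Scheduler provided, which is part of the broader point below about reactive types and JPA not mixing well.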
There is a bit more to explain why you want to refrain from using reactive types with JPA. Reactive programming has no strong Thread affinity. Thread switching may occur at any time. Depending on how you use the resulting Flux and how you have designed your entities, you might experience visibility issues as entities are passed across threads. Ideally, data in a reactive context remains immutable. Such an approach does not always comply with JPA rules.
Another aspect is lazy loading. By using JPA entities from threads other than the carrier Thread, the entity may not be able to correlate its context back to the JPA Transaction. You can easily run into LazyInitializationException without being aware of why this is as Thread switching can be opaque to you.
The recommendation is: Do not use reactive types with JPA or any other transactional resources. Stay with Java 8 Stream instead.

The Stream returned by the repository is lazy. It uses the connection to the database in order to get the rows when the stream is being consumed by a terminal operation.
The connection is bound to the current transaction, and the current transaction is stored in a ThreadLocal variable, i.e. is bound to the thread that is executing your test method.
But the consumption of the stream is done on a separate thread, belonging to the thread pool used by the elastic scheduler of Reactor. So you create the lazy stream on the main thread, which has the transaction bound to it, but you consume the stream on a separate thread, which doesn't have the transaction bound to it.
Don't use reactor with JPA transactions and entities. They're incompatible.
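This failure mode can be reproduced without JPA at all. In the sketch below, CONNECTION_OPEN is a made-up stand-in for the transaction-bound connection: the lazy Stream is declared on a thread where the "connection" is open, but consuming it on another thread (where it is not) fails, just like the streaming query does:

```java
import java.util.stream.Stream;

public class LazyStreamDemo {
    // Illustrative stand-in for the per-thread transaction/connection state
    static final ThreadLocal<Boolean> CONNECTION_OPEN = ThreadLocal.withInitial(() -> false);

    static Stream<Integer> lazyQuery() {
        // Like a streaming repository method: nothing is fetched here.
        // Rows are produced only when a terminal operation runs, and that
        // requires the consuming thread's "connection" to be open.
        return Stream.of(1, 2, 3).map(row -> {
            if (!CONNECTION_OPEN.get()) {
                throw new IllegalStateException("no surrounding transaction on this thread");
            }
            return row;
        });
    }

    public static void main(String[] args) throws InterruptedException {
        CONNECTION_OPEN.set(true);            // "transaction" open on the main thread
        Stream<Integer> stream = lazyQuery(); // fine: the stream is only declared, not consumed

        Thread consumer = new Thread(() -> stream.forEach(System.out::println));
        consumer.start(); // the terminal operation throws IllegalStateException on this thread
        consumer.join();
    }
}
```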

Related

Persist objects in db using reactive programming and JPA respository

I am using WebFlux in my project and am trying to do simple CRUD operations using a JPA repository. However, I'm unable to persist the object in the DB; it is always persisted with null values instead. Please help. Thanks in advance.
Here is my pojo:
@Entity
@Table(name = "tbl_student")
@Data
public class Student {
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "student_seq")
    @SequenceGenerator(name = "student_seq", allocationSize = 1)
    @Column(name = "id", insertable = false, nullable = false, updatable = false)
    private Long id;
    @Column(name = "name")
    private String name;
    @Column(name = "school")
    private String school;
}
My Repo:
public interface StudentRepository extends JpaRepository<Student,Long> {
}
My controller:
@RestController
@RequiredArgsConstructor
@Slf4j
public class StudentApiControllerImpl extends StudentApiController {
    private final StudentRepository StudentRepository;
    private final ModelMapper modelMapper;

    public Mono<Void> addStudent(@Valid @RequestBody(required = false) Mono<StudentDetails> studentDetails, ServerWebExchange exchange) {
        StudentRepository.save(modelMapper.map(studentDetails, StudentDTO.class));
        return Mono.empty();
    }

    public Flux<StudentDetails> getStudent(ServerWebExchange exchange) {
        return Flux.fromStream(StudentRepository.findAll().stream()
                .map(v -> modelMapper.map(v, LoginCredential.class)))
                .subscribeOn(Schedulers.boundedElastic());
    }
}
You are breaking the reactive chain, that's why nothing happens.
Reactive programming is very different from standard imperative Java, so you can't just do what you have done before and expect it to work the same way.
In reactive programming, one of the most important things to understand is that nothing happens until you subscribe.
A reactive chain is built from a producer and a subscriber: someone produces (your application) and someone subscribes (the calling client/application). A Flux/Mono is a producer, and nothing will happen until someone subscribes.
// Nothing happens, this is just a declaration
Mono.just("Foobar");
But when we subscribe:
// This will print FooBar
Mono.just("Foobar").subscribe(s -> System.out.println(s));
So if we look at your code, especially this line
// This is just a declaration, no one is subscribing, so nothing will happen
StudentRepository.save(modelMapper.map(studentDetails, StudentDTO.class));
A common misconception is that people will solve this by just subscribing.
// This is in most cases wrong
StudentRepository.save(modelMapper.map(studentDetails, StudentDTO.class)).subscribe();
Because the subscriber is the calling client, the one that initiated the call. Subscribing yourself might lead to very bad performance under heavier loads. Instead, you need to return the Mono out to the client, so that the client can subscribe.
public Mono<Void> addStudent(@Valid @RequestBody(required = false) Mono<StudentDetails> studentDetails, ServerWebExchange exchange) {
    return StudentRepository.save(modelMapper.map(studentDetails, StudentDTO.class))
            .then();
}
I am using Mono#then here to throw away the return value and just return a void value to the calling client.
But as you can see, you need to think of it like callbacks: you always need to return the chain, so that it gets passed back to the calling client and the client can subscribe.
I highly suggest you read the reactive documentation so you understand the core concepts before starting out with reactive programming.
Reactive programming getting started
Also, another thing: you cannot use JpaRepository in a reactive world, because it is built on a blocking database driver. That basically means it is not written to work well with WebFlux and will give very poor performance. If you want a database connection, I suggest you look into Spring's R2DBC; here is a tutorial to get it up and running: R2DBC getting started
Update:
I see now that you have placed the entire call on its own scheduler, which is good since you are using JPA and will lessen the performance impact. But I still recommend, if possible, looking into R2DBC for a fully reactive application.
Also, if you insist on using JPA with a blocking database driver, you need to perform the call in a reactive context, since it returns a concrete value. So, for instance:
return Mono.fromCallable(() -> StudentRepository.save(modelMapper.map(studentDetails, StudentDTO.class)))
        .then();
Which means we are basically executing our function, immediately placing the returned value into a Mono, and then, as in the example above, using Mono#then to discard the return value and just signal that execution has completed.
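Building on that, a common pattern for blocking calls that must stay (sketched here with a placeholder Callable rather than the repository above) is to combine Mono.fromCallable with a scheduler intended for blocking work:

```java
import java.util.concurrent.Callable;

import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

public class BlockingCallWrapper {
    // Defers the blocking call and shifts its execution off the event loop
    // onto Reactor's boundedElastic scheduler.
    static <T> Mono<T> blockingToMono(Callable<T> blockingCall) {
        return Mono.fromCallable(blockingCall)
                .subscribeOn(Schedulers.boundedElastic());
    }

    public static void main(String[] args) {
        // Stand-in for something like repository.save(...)
        System.out.println(blockingToMono(() -> "saved").block()); // saved
    }
}
```

The fromCallable part makes the call lazy; the subscribeOn part keeps it off the WebFlux event-loop threads.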

blocking EntityManager operations

I don't want to perform a blocking operation.
Caused by: java.lang.IllegalStateException: You have attempted to perform a blocking operation on a IO thread. This is not allowed, as blocking the IO thread will cause major performance issues with your application. If you want to perform blocking EntityManager operations make sure you are doing it from a worker thread.
Anyone know how to fix this problem?
I only have simple operations: a single findAll request that returns 10 rows. I put @Transactional(NEVER)
and I still have the problem.
I'm using Panache with a simple entity.
@GET
@Path("/type")
@Produces(MediaType.APPLICATION_JSON)
@Transactional(Transactional.TxType.NEVER)
public Response get() {
    return AlertType.listAll();
}
public class AlerteType extends PanacheEntityBase
{
    @Column(name = "ATY_ACTIVE")
    private String active;

    @Column(name = "ATY_ID")
    @Id
    private Long oraId;

    @Column(name = "ATY_TYPE")
    private String type;
}
Thanks
If you want to keep using non-reactive code, you can use the @Blocking annotation on the get() method. It will offload the computation to a worker thread (instead of an IO thread).
Quarkus is really picky about IO threads: you cannot block them. And anything like a database call (or any remote call) is blocking, so you cannot do it on an IO thread.
More info:
https://quarkus.io/guides/getting-started-reactive
https://quarkus.io/blog/resteasy-reactive-faq/
"Controller" methods (request / route / path handlers, or whatever you call it) is executed on IO thread and not supposed to do any time consuming tasks such as database querying.
If you're not using reactive database client, try wrap them in side a "Service" class.
@ApplicationScoped
public class AlertService {
    private final AlertType alertType;

    @Inject
    public AlertService(AlertType alertType) {
        this.alertType = alertType;
    }

    public List<Alert> listAll() {
        return this.alertType.listAll();
    }
}
Thank you, but I already had the call in a service.
I found a solution with Mutiny:
@GET
@Path("type")
@Produces(MediaType.APPLICATION_JSON)
public Uni<Response> get() {
    return Uni.createFrom().item(alertTypeService.findAll().get())
            .onItem().transform(data -> Response.ok(data))
            .onFailure().recoverWithItem(err -> Response.status(600, err.getMessage()))
            .onItem().transform(ResponseBuilder::build)
            .emitOn(Infrastructure.getDefaultExecutor());
}
Where alertTypeService.findAll() returns a supplier:
@Transactional(Transactional.TxType.NEVER)
public Supplier<Set<AlerteTypeDTO>> findAll() {
    return () -> alerteTypeDAO.streamAll()
            .map(AlertTypeDTOMapper::mapToDTO)
            .collect(Collectors.toSet());
}
I don't know if this is the right solution, but it works.
This way the service provides a supplier which will be invoked by the correct thread.
At least that's how I understood it.

JPA correct way to handle detached entity state in case of exceptions/rollback

I have this class, and I thought of three ways to handle detached entity state in case of persistence exceptions (which are handled elsewhere):
@ManagedBean
@ViewScoped
public class EntityBean implements Serializable
{
    @EJB
    private PersistenceService service;

    private Document entity;

    public void update()
    {
        // HANDLING 1. ignore errors
        service.transact(em ->
        {
            entity = em.merge(entity);
            // some other code that modifies [entity] properties:
            // entity.setCode(...);
            // entity.setResposible(...);
            // entity.setSecurityLevel(...);
        }); // an exception may be thrown on method return (rollback),
            // but [entity] has already been reassigned with a "dirty" one.

        //------------------------------------------------------------------

        // HANDLING 2. ensure entity is untouched before flush is ok
        service.transact(em ->
        {
            Document managed = em.merge(entity);
            // some other code that modifies [managed] properties:
            // managed.setCode(...);
            // managed.setResposible(...);
            // managed.setSecurityLevel(...);
            em.flush(); // an exception may be thrown here (rollback),
                        // forcing method exit without [entity] being reassigned.
            entity = managed;
        }); // an exception may be thrown on method return (rollback),
            // but [entity] has already been reassigned with a "dirty" one.

        //------------------------------------------------------------------

        // HANDLING 3. ensure entity is untouched before whole transaction is ok
        AtomicReference<Document> reference = new AtomicReference<>();
        service.transact(em ->
        {
            Document managed = em.merge(entity);
            // some other code that modifies [managed] properties:
            // managed.setCode(...);
            // managed.setResposible(...);
            // managed.setSecurityLevel(...);
            reference.set(managed);
        }); // an exception may be thrown on method return (rollback),
            // and [entity] is safe, it's not been reassigned yet.
        entity = reference.get();
    }
    ...
}
PersistenceService#transact(Consumer<EntityManager> consumer) can throw unchecked exceptions.
The goal is to keep the state of the entity aligned with the state of the database, even in case of exceptions (i.e. prevent the entity from becoming "dirty" after a transaction failure).
Method 1 is obviously naive and doesn't guarantee coherence.
Method 2 asserts that nothing can go wrong after flushing.
Method 3 prevents the new entity assignment if there's an exception anywhere in the transaction.
Questions:
Is method 3. really safer than method 2.?
Are there cases where an exception is thrown between flush [excluded] and commit [included]?
Is there a standard way to handle this common problem?
Thank you
Note that I'm already able to roll back the transaction and close the EntityManager (PersistenceService#transact will do it gracefully), but I need to deal with the database state and the business objects getting out of sync. Usually this is not a problem; in my case it is, because exceptions are usually generated by the Bean Validator (those on the JPA side, not the JSF side, for computed values that depend on user input), and I want the user to enter correct values and try again, without losing the values entered before.
Side note: I'm using Hibernate 5.2.1
This is the PersistenceService (CMT):
@Stateless
@Local
public class PersistenceService implements Serializable
{
    @PersistenceContext
    private EntityManager em;

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void transact(Consumer<EntityManager> consumer)
    {
        consumer.accept(em);
    }
}
@DraganBozanovic
That's it! Great explanation for points 1 and 2.
I'd just love you to elaborate a little more on point 3 and give me some advice on a real-world use case.
However, I would definitely not use AtomicReference or similar cumbersome constructs. Java EE, Spring and other frameworks and application containers support declaring transactional methods via annotations: Simply use the result returned from a transactional method.
When you have to modify a single entity, the transactional method would just take the detached entity as parameter and return the updated entity, easy.
public Document updateDocument(Document doc)
{
    Document managed = em.merge(doc);
    // managed.setXxx(...);
    // managed.setYyy(...);
    return managed;
}
But when you need to modify more than one entity in a single transaction, the method can become a real pain:
public LinkTicketResult linkTicket(Node node, Ticket ticket)
{
    LinkTicketResult result = new LinkTicketResult();

    Node managedNode = em.merge(node);
    result.setNode(managedNode);
    // modify managedNode

    Ticket managedTicket = em.merge(ticket);
    result.setTicket(managedTicket);
    // modify managedTicket

    Remark managedRemark = createRemark(...);
    result.setRemark(managedRemark);

    return result;
}
In this case, my pain:
I have to create a dedicated transactional method (maybe a dedicated @EJB too)
That method will be called only once (it will have just one caller): a "one-shot", non-reusable public method. Ugly.
I have to create the dummy class LinkTicketResult
That class will be instantiated only once, in that method: also "one-shot"
The method could have many parameters (or another dummy class, LinkTicketParameters)
JSF controller actions will, in most cases, just call an EJB method, extract the updated entities from the returned container and reassign them to local fields
My code will be steadily polluted with "one-shotters", too many for my taste.
Probably I'm not seeing something big that's just in front of me, I'll be very grateful if you can point me in the right direction.
Is method 3. really safer than method 2.?
Yes. Not only is it safer (see point 2), but it is conceptually more correct, as you change transaction-dependent state only when you proved that the related transaction has succeeded.
Are there cases where an exception is thrown between flush [excluded] and commit [included]?
Yes. For example:
LockMode.OPTIMISTIC:
Optimistically assume that transaction will not experience contention
for entities. The entity version will be verified near the transaction
end.
It would be neither performant nor practically useful to check optimistic lock violations during each flush operation within a single transaction.
Deferred integrity constraints (enforced at commit time in db). Not used often, but are an illustrative example for this case.
Later maintenance and refactoring. You or somebody else may later introduce additional changes after the last explicit call to flush.
Is there a standard way to handle this common problem?
Yes, I would say that your third approach is the standard one: Use the results of a complete and successful transaction.
However, I would definitely not use AtomicReference or similar cumbersome constructs. Java EE, Spring and other frameworks and application containers support declaring transactional methods via annotations: Simply use the result returned from a transactional method.
Not sure if this is entirely to the point, but there is only one way to recover after exceptions: rollback and close the EM. From https://docs.jboss.org/hibernate/entitymanager/3.6/reference/en/html/transactions.html#transactions-basics-issues
An exception thrown by the Entity Manager means you have to rollback
your database transaction and close the EntityManager immediately
(discussed later in more detail). If your EntityManager is bound to
the application, you have to stop the application. Rolling back the
database transaction doesn't put your business objects back into the
state they were at the start of the transaction. This means the
database state and the business objects do get out of sync. Usually
this is not a problem, because exceptions are not recoverable and you
have to start over your unit of work after rollback anyway.
-- EDIT--
Also see http://piotrnowicki.com/2013/03/jpa-and-cmt-why-catching-persistence-exception-is-not-enough/
ps: downvote is not mine.

How to test LazyInitializationExceptions?

I have some code which (in production):
In one thread, primes a cache with data from the db
In another thread, grabs the data from the cache and starts iterating its properties.
This threw a LazyInitializationException.
While I know how to fix the problem, I want to get a test around this. However I can't figure out how to recreate the exception in the correct part of the test.
I have to prime the DB with some test data, therefore my test is annotated with @Transactional. Failing to do so causes the set-up to fail with... you guessed it... a LazyInitializationException.
Here's my current test:
@Transactional
public class UpdateCachedMarketPricingActionTest extends AbstractIntegrationTest {
    @Autowired
    private UpdateCachedMarketPricingAction action;

    @Autowired
    private PopulateMarketCachesTask populateMarketCachesTask;

    @Test @SneakyThrows
    public void updatesCachedValues()
    {
        // Populate the cache from a different thread, as this is how it happens in real life
        Thread updater = new Thread(new Runnable() {
            @Override
            public void run() {
                populateMarketCachesTask.populateCaches();
            }
        });
        updater.start();
        updater.join();

        updateMessage = {...} // omitted
        action.processInstrumentUpdate(updateMessage);
    }
So, I'm priming my cache in a separate thread, to try to get it outside of the current @Transactional scope. Additionally, I'm also calling entityManager.detach(entity) inside the cache primer, to try to ensure that the entities in the cache can't lazy-load their collections.
However, the test passes... no exception is thrown.
How can I forcibly get an entity into a state such that, when I next try to iterate its collections, it will throw the LazyInitializationException?
You need to ensure that the transactions for each operation are committed independently of each other. Annotating your test method or test class with @Transactional leaves the current test transaction open and then rolls it back after execution of the entire test.
So one option is to do something like the following:
@Autowired
private PlatformTransactionManager transactionManager;

@Test
public void example() {
    new TransactionTemplate(transactionManager).execute(new TransactionCallbackWithoutResult() {
        @Override
        protected void doInTransactionWithoutResult(TransactionStatus status) {
            // add your code here...
        }
    });
}
You could invoke your first operation in its own callback, and then invoke the second operation in a different callback. Then, when you access Hibernate or JPA entities after the callbacks, the entities will no longer be attached to the current unit of work (e.g., Hibernate Session). Consequently, accessing a lazy collection or field at that point would result in a LazyInitializationException.
Regards,
Sam
p.s. please note that this technique will naturally leave changes committed to your database. So if you need to clean up that modified state, consider doing so manually in an @AfterTransaction method.

REQUIRES_NEW within REQUIRES_NEW within REQUIRES_NEW ... on and on

JBoss 4.x
EJB 3.0
I've seen code like the following (greatly abbreviated):
@Stateless
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
public class EJB1 implements IEJB1
{
    @EJB
    private IEJB1 self;

    @EJB
    private IEJB2 ejb2;

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public boolean someMethod1()
    {
        return someMethod2();
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public boolean someMethod2()
    {
        return self.someMethod3();
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public boolean someMethod3()
    {
        return ejb2.someMethod1();
    }
}
And say EJB2 is almost an exact copy of EJB1 (same three methods), and EJB2.someMethod3() calls into EJB3.someMethod1(), which then finally in EJB3.someMethod3() writes to the DB.
This is a contrived example, but I've seen similar code to the above in our codebase. The code actually works just fine.
However, it feels like terrible practice, and I'm concerned about the @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW) on every method, even those that don't actually perform any DB writes. Does this actually create a new transaction every single time for every method call, with the result of:
new transaction
-new transaction
--new transaction
---new transaction
...(many more)
-------new transaction (DB write)
And then unwraps at that point? Would this ever be a cause for performance concern? Additional thoughts?
Does this actually create a new transaction every single time for
every method call
No, it doesn't. A new transaction will be created only when calling a method through an EJB reference from another bean. Invoking method2 from method1 within the same bean won't spawn a new transaction.
See also here and here. The latter is an exceptionally good article explaining transaction management in EJB.
Edit:
Thanks @korifey for pointing out that method2 actually calls method3 on a bean reference, thus resulting in a new transaction.
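The self-invocation point can be demonstrated with a plain JDK dynamic proxy, a rough analogue of the container's transactional proxy (the names here are invented for the demo): calls that go through the proxy reference hit the interceptor, while a plain this.method() call inside the bean bypasses it entirely:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class SelfInvocationDemo {
    interface Service {
        void outer();
        void inner();
    }

    // Records every method name that passes through the "container" proxy
    static final List<String> INTERCEPTED = new ArrayList<>();

    static Service proxied(Service target) {
        InvocationHandler handler = (proxy, method, args) -> {
            INTERCEPTED.add(method.getName()); // where a container would start a transaction
            return method.invoke(target, args);
        };
        return (Service) Proxy.newProxyInstance(
                Service.class.getClassLoader(), new Class<?>[]{Service.class}, handler);
    }

    public static void main(String[] args) {
        Service bean = new Service() {
            public void outer() { inner(); } // plain this.inner(): never reaches the proxy
            public void inner() { }
        };
        proxied(bean).outer();
        System.out.println(INTERCEPTED); // [outer]
    }
}
```

This is why the self field in the question's bean matters: self.someMethod3() goes back through the container proxy, so REQUIRES_NEW is honored, whereas a direct call would not be.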
It really creates a new JTA transaction in every EJB, and this must have a serious performance impact on read-only methods (which only do SELECTs, not updates). Use @TransactionAttribute(TransactionAttributeType.SUPPORTS) for read-only methods.
