IntelliJ IDEA: configure build for stub - Java

Suppose we don't have access to some needed server while debugging the program on our machine, so we need a stub. But then every time I want to make a build without the stub, I have to comment out my stub code and uncomment the actual code, which looks dirty. It would be nice if I could configure the builds somehow to avoid this commenting/uncommenting. I haven't figured out any good solution.
For example, some code:
public class SingleFormatServiceClient {
    public static final QName SERVICE_NAME = new QName("http://creditregistry.ru/2010/webservice/SingleFormatService", "SingleFormatService");

    // Actual implementation: connects to the real web service.
    public SingleFormatService Connect() {
        URL wsdlURL = SingleFormatService_Service.WSDL_LOCATION;
        SingleFormatService_Service ss = new SingleFormatService_Service(wsdlURL, SERVICE_NAME);
        return ss.getSingleFormatServiceHttpPort();
    }

    // Stub implementation (same signature, so only one of the two can be compiled in).
    public SingleFormatService Connect() {
        return new SingleFormatServiceStub();
    }
}
So the first method is the actual one and the second is the stub. Maybe there is a way, instead of commenting, to just tell the builder that now I want a build with the first method, and now with the second?
Thank you.

Use System.getProperty() to choose which implementation to instantiate. For example:
SingleFormatService service = (SingleFormatService) Class.forName(
        System.getProperty("single_format_service_class",
                "your.comp.SingleFormatServiceStub")).getConstructor().newInstance();
Your implementations must provide a no-arg constructor. In your JVM arguments, specify the class you want to use, e.g.
-Dsingle_format_service_class=your.comp.SingleFormatServiceActual
In IntelliJ IDEA, you can define several run configurations with different JVM arguments.
NB: many libraries use this approach. Hibernate, for example, uses hibernate.cache.provider_class to choose which cache provider implementation to use.
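Wrapped up as a factory, a minimal sketch could look like this (the factory class name is made up; the service and stub class names are the placeholders from the question, and the reflection errors are rethrown unchecked for brevity):
public final class SingleFormatServiceFactory {

    private static final String PROPERTY = "single_format_service_class";
    private static final String DEFAULT_IMPL = "your.comp.SingleFormatServiceStub";

    public static SingleFormatService create() {
        String className = System.getProperty(PROPERTY, DEFAULT_IMPL);
        try {
            // Load and instantiate the configured implementation reflectively;
            // it must be on the classpath and expose a public no-arg constructor.
            return (SingleFormatService) Class.forName(className)
                    .getConstructor()
                    .newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Cannot instantiate " + className, e);
        }
    }
}
Callers then just use SingleFormatServiceFactory.create(), and the stub/actual decision lives entirely in the run configuration.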

Related

JUnit - How to unit test method that reads files in a directory and uses external libraries

I have this method that I am using in a NetBeans plugin:
public static SourceCodeFile getCurrentlyOpenedFile() {
    MainProjectManager mainProjectManager = new MainProjectManager();
    Project openedProject = mainProjectManager.getMainProject();
    /* Get the Java file currently displayed in the IDE if there is an opened project */
    if (openedProject != null) {
        TopComponent activeTC = TopComponent.getRegistry().getActivated();
        DataObject dataLookup = activeTC.getLookup().lookup(DataObject.class);
        File file = FileUtil.toFile(dataLookup.getPrimaryFile()); // Currently opened file
        // Check if the opened file is a Java file
        if (FilenameUtils.getExtension(file.getAbsoluteFile().getAbsolutePath()).equalsIgnoreCase("java")) {
            return new SourceCodeFile(file);
        } else {
            return null;
        }
    } else {
        return null;
    }
}
Basically, using the NetBeans API, it detects the file currently opened by the user in the IDE. Then it loads it and creates a SourceCodeFile object out of it.
Now I want to unit test this method using JUnit. The problem is that I don't know how to test it.
Since it doesn't receive any arguments, I can't test how it behaves given wrong arguments. I also thought about manipulating openedProject in order to test the method's behaviour given different values for that object, but as far as I know, I can't manipulate a variable that way in JUnit. I also cannot check what the method returns, because in a unit test it will always return null, since no opened file is detected in NetBeans.
So, my question is: how can I approach the unit testing of this method?
Well, your method does take parameters, "between the lines":
MainProjectManager mainProjectManager = new MainProjectManager();
Project openedProject = mainProjectManager.getMainProject();
basically fetches the object to work on.
So the first step would be to change that method signature, to:
public static SourceCodeFile getCurrentlyOpenedFile(Project project) {
...
Of course, that object isn't used, except for that null check. So the next level would be to have a distinct method like
SourceCodeFile lookup(DataObject dataLookup) {
In other words: your real problem is that you wrote hard-to-test code. The "default" answer is: you have to change your production code to make it easier to test.
For example, by ripping it apart and putting the different aspects into smaller helper methods.
You see, that last method lookup() takes a parameter, and now it becomes possible to think up test cases for it. Probably you will have to use a mocking framework such as Mockito to pass mocked instances of that DataObject class within your test code.
Long story short: there are no detours here. You can't test your code (in reasonable ways) as it is currently structured. Re-structure your production code, then all your ideas about "when I pass X, then Y should happen" can work out.
Disclaimer: yes, theoretically, you could test the above code by heavily relying on frameworks like PowerMock(ito) or JMockit. These frameworks allow you to control (mock) calls to static methods, or to new(). So they would give you full control over everything in your method. But that would basically force your tests to know everything that is going on in the method under test, which is a really bad thing.
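For illustration, here is one possible endpoint of that refactoring, going one step beyond lookup(DataObject): let the helper receive the already-resolved File, so no static NetBeans call remains and plain JUnit suffices. The class and method names below are made up; SourceCodeFile is from the question:
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertNull;

import java.io.File;
import org.apache.commons.io.FilenameUtils;
import org.junit.Test;

public class SourceCodeFileHelperTest {

    // Hypothetical extracted helper: it receives the already-resolved File,
    // so no NetBeans registry call or static FileUtil call is left to mock.
    static SourceCodeFile fromFile(File file) {
        if (file != null
                && FilenameUtils.getExtension(file.getAbsolutePath()).equalsIgnoreCase("java")) {
            return new SourceCodeFile(file);
        }
        return null;
    }

    @Test
    public void returnsNullForNonJavaFile() {
        assertNull(fromFile(new File("notes.txt")));
    }

    @Test
    public void wrapsJavaFile() {
        assertNotNull(fromFile(new File("Example.java")));
    }
}
The original getCurrentlyOpenedFile() then shrinks to the NetBeans lookups plus a call to fromFile(), and only that thin shell stays untested (or is covered by an integration test).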

How to set up a configured embedder for use of meta filters (-skip) with Serenity, JBehave and Selenium

While creating new scenarios I only want to run the scenario I am currently working on. For this purpose I want to use the Meta: @skip tag before my scenarios. As I found out, I have to use the embedder to configure the meta tags used, so I tried:
configuredEmbedder().useMetaFilters(Arrays.asList("-skip"));
but actually this still has no effect on my test scenarios. I used it in the constructor of my SerenityStories test suite definition. Here is the complete code of this class:
public class AcceptanceTestSuite extends SerenityStories {

    @Managed
    WebDriver driver;

    public AcceptanceTestSuite() {
        System.setProperty("webdriver.chrome.driver", "D:/files/chromedriver/chromedriver.exe");
        System.setProperty("chrome.switches", "--lang=en");
        System.setProperty("restart.browser.each.scenario", "true");
        configuredEmbedder().useMetaFilters(Arrays.asList("-skip"));
        runSerenity().withDriver("chrome");
    }

    @Override
    public Configuration configuration() {
        Configuration configuration = super.configuration();
        Keywords keywords = new LocalizedKeywords(DEFAULTSTORYLANGUAGE);
        Properties properties = configuration.storyReporterBuilder().viewResources();
        properties.setProperty("encoding", "UTF-8");
        configuration.useKeywords(keywords)
                .useStoryParser(new RegexStoryParser(keywords, new ExamplesTableFactory(new LoadFromClasspath(this.getClass()))))
                .useStoryLoader(new UTF8StoryLoader()).useStepCollector(new MarkUnmatchedStepsAsPending(keywords))
                .useDefaultStoryReporter(new ConsoleOutput(keywords)).storyReporterBuilder().withKeywords(keywords).withViewResources(properties);
        return configuration;
    }
}
Is this the wrong place, or have I missed something? All scenarios are still executed.
EDIT:
I changed the following classes and now I think that it "works":
public AcceptanceTestSuite() {
    System.setProperty("webdriver.chrome.driver", "D:/files/chromedriver/chromedriver.exe");
    System.setProperty("chrome.switches", "--lang=de");
    System.setProperty("restart.browser.each.scenario", "true");
    this.useEmbedder(configuredEmbedder());
    runSerenity().withDriver("chrome");
}

@Override
public Embedder configuredEmbedder() {
    final Embedder embedder = new Embedder();
    embedder.embedderControls()
            .useThreads(1)
            .doGenerateViewAfterStories(true)
            .doIgnoreFailureInStories(false)
            .doIgnoreFailureInView(false)
            .doVerboseFailures(true);
    final Configuration configuration = configuration();
    embedder.useConfiguration(configuration);
    embedder.useStepsFactory(stepsFactory());
    embedder.useMetaFilters(Arrays.asList("-skip"));
    return embedder;
}
But now I get the message [pool-1-thread-1] INFO net.serenitybdd.core.Serenity - TEST IGNORED, yet the scenario is still executed. Only on the results page do I get the info that this scenario is ignored (but still executed). Is there a way to SKIP the scenario so it won't run at all?
I could not make it work using configuredEmbedder(), but I got there by adding -Dmetafilter="+working -finished" as a goal in my mvn run configurations, tagging the scenarios I am working on (and want to run) with @working and the scenarios I don't want to execute with @finished. I still have to change the run configuration whenever I want to change the meta tags, so it is not very comfortable, but I get what I was looking for.
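For reference, the tags then go into the Meta: section of each scenario in the story files, roughly like this (the scenario text is illustrative):
Scenario: scenario I am currently working on
Meta: @working

Given ...

Scenario: scenario that is already done
Meta: @finished

Given ...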
As long as you document it well (some doc in https://github.com/serenity-bdd/the-serenity-book would be brilliant), I think as a JBehave/Serenity user you are well enough placed to decide which option makes the most sense.
Investigation
I debugged the serenity-jbehave classes, trying to understand why setting
configuredEmbedder().useMetaFilters(Collections.singletonList("-skip"))
has no effect, no matter where I put it within my class extending SerenityStories. I found the strategic place in the code where the metaFilters in ExtendedEmbedder#embedder are overwritten, replacing whatever we define in our class with the settings from serenity-jbehave.
This method is SerenityReportingRunner#createPerformableTree:
private PerformableTree createPerformableTree(List<CandidateSteps> candidateSteps, List<String> storyPaths) {
    ExtendedEmbedder configuredEmbedder = this.getConfiguredEmbedder();
    configuredEmbedder.useMetaFilters(getMetaFilters());
    BatchFailures failures = new BatchFailures(configuredEmbedder.embedderControls().verboseFailures());
    PerformableTree performableTree = configuredEmbedder.performableTree();
    RunContext context = performableTree.newRunContext(getConfiguration(), candidateSteps,
            configuredEmbedder.embedderMonitor(), configuredEmbedder.metaFilter(), failures);
    performableTree.addStories(context, configuredEmbedder.storyManager().storiesOfPaths(storyPaths));
    return performableTree;
}
This line replaces the previously set metaFilters:
configuredEmbedder.useMetaFilters(getMetaFilters());
It overrides the current metaFilters value.
Going further down the call chain, we get to the logic that defines where metaFilters are read from, i.e. where we can actually set them.
SerenityReportingRunner#createPerformableTree
↓
SerenityReportingRunner#getMetaFilters
↓
SerenityReportingRunner#getMetafilterSetting
This is the method we need!
private String getMetafilterSetting() {
    Optional<String> environmentMetafilters = getEnvironmentMetafilters();
    Optional<String> annotatedMetafilters = getAnnotatedMetafilters(testClass);
    Optional<String> thucAnnotatedMetafilters = getThucAnnotatedMetafilters(testClass);
    return environmentMetafilters.orElse(annotatedMetafilters.orElse(thucAnnotatedMetafilters.orElse("")));
}
As we see here, the metaFilters can be defined in three places, which override each other. In order of decreasing priority, they are:
1. The value of the metafilter (exactly all lowercase!) VM property.
2. The value of the net.serenitybdd.jbehave.annotations.Metafilter annotation on our SerenityStories class.
3. The value of the net.thucydides.jbehave.annotations.Metafilter annotation on our SerenityStories class. This annotation is deprecated, but kept for backwards compatibility.
Solution that is working with the current serenity-jbehave version
I've tried/debugged all three options; they work and override each other as described above.
1. Use environment metafilter property
Added this to my JVM run arguments:
-Dmetafilter="-skip"
2. Use the modern #Metafilter annotation
import net.serenitybdd.jbehave.SerenityStories;
import net.serenitybdd.jbehave.annotations.Metafilter;
#Metafilter("-skip")
public class Acceptance extends SerenityStories {
3. Use the deprecated #Metafilter annotation
import net.serenitybdd.jbehave.SerenityStories;
import net.thucydides.jbehave.annotations.Metafilter;
#Metafilter("-skip") // warned as deprecated
public class Acceptance extends SerenityStories {
The solution for my current project is to use the modern @Metafilter("-skip") annotation on my test class, so as not to depend on (or have to change) the VM properties of a particular Jenkins or local dev execution.
Possible pull request to make
At https://github.com/serenity-bdd/serenity-core/issues/95, the Serenity maintainers suggested that I submit a PR with this fix, since they are not concentrating on Serenity + JBehave at the moment.
I understand where to make the changes (in the code chain described above), but I don't know what the overriding logic should be:
1. MetaFilters from configuredEmbedder override any ENV/annotation MetaFilters.
OR
2. Any ENV/annotation MetaFilters override MetaFilters from configuredEmbedder.
OR
3. MetaFilters from configuredEmbedder are merged with ENV/annotation MetaFilters. This option would require a merging priority.
Any suggestions?
Whichever fix is made, I would prefer to add explicit logging of how the overriding works to SerenityReportingRunner#getMetafilterSetting, since the current behaviour is really non-obvious and took a lot of time to investigate.

JMockit - "Missing invocation to mocked type" when mocking System.getProperties()

I am admittedly new to JMockit, but I am for some reason having trouble mocking System.getProperties(). Thanks to the help of the following post:
https://stackoverflow.com/questions/25664270/how-can-i-partially-mock-the-system-class-with-jmockit-1-8?lq=1
I can successfully mock System.getProperty() using JMockit 1.12:
@Test
public void testAddSystemProperty_String() {
    final String propertyName = "foo";
    final String propertyValue = "bar";
    new Expectations(System.class) {{
        System.getProperty(propertyName);
        returns(propertyValue);
    }};
    assertEquals(propertyValue, System.getProperty(propertyName));
}
But the eerily similar code for mocking getProperties() barfs:
@Test
public void testAddSystemProperty_String() {
    final String propertyName = "foo";
    final String propertyValue = "bar";
    final Properties properties = new Properties();
    properties.setProperty(propertyName, propertyValue);
    new Expectations(System.class) {{
        System.getProperties();
        returns(properties);
    }};
    assertEquals(1, System.getProperties().size());
}
I get the following exception, which points at the "returns" method:
Missing invocation to mocked type at this point;
please make sure such invocations appear only after
the declaration of a suitable mock field or parameter
Also, how do I mock both methods at the same time? If I put them in the same Expectations block (with getProperty() first), then I do not see the exception, but System.getProperties() returns the real system properties, not the mocked ones, though getProperty(propertyName) returns the mocked value. I find that totally wonky behavior.
I see from this post that certain methods cannot be mocked, but System.getProperties() is not on that list:
JMockit NullPointerException on Exceptions block?
I am also finding that a lot of solutions on SO that worked with JMockit 2-3 years ago are totally non-compilable now, so apparently things change a lot.
System.getProperties() is indeed one of the methods excluded from mocking in JMockit 1.12. The exact set of such excluded methods can change in newer versions, as new problematic JRE methods are found.
There is no need to mock System.getProperty(...) or System.getProperties(), though. The System class provides setProperty and setProperties methods which can be used instead.
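A minimal sketch of that advice, assuming a plain JUnit 4 test: set the property for the duration of the test and restore the previous value afterwards, with no mocking involved:
@Test
public void readsConfiguredProperty() {
    String old = System.setProperty("foo", "bar"); // returns the previous value, or null
    try {
        assertEquals("bar", System.getProperty("foo"));
        // ... exercise the code under test that reads the property ...
    } finally {
        // Restore the original state so other tests are unaffected.
        if (old == null) {
            System.clearProperty("foo");
        } else {
            System.setProperty("foo", old);
        }
    }
}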
Hopefully someone else finds this useful:
This can be solved with Mockito / PowerMock (1.5.3).
Note that I am testing a utility that will exhaustively try to find the property value given a list of possible sources. A source could be a system property, an environment variable, a meta-inf services file, a jndi name, a thread local, a file on disk, an ldap, basically anything that can perform a lookup.
@RunWith(PowerMockRunner.class)
@PrepareForTest({ClassThatDirectlyCallsSystemInCaseItIsNestedLikeInMyCase.class})
public class ConfigPropertyBuilderTest {

    // Values as used in the earlier tests:
    private final String propertyName = "foo";
    private final String propertyValue = "bar";

    @Test
    public void testAddSystemProperty_String_using_PowerMockito() {
        PowerMockito.mockStatic(System.class);
        PowerMockito.when(System.getProperty(propertyName)).thenReturn(propertyValue);
        PowerMockito.when(System.getProperties()).thenReturn(new Properties() {{
            setProperty(propertyName, propertyValue);
        }});
        // Here is a realistic case of calling something that eventually calls System:
        // new configBuilder().addEnvironmentVariable(propertyName)
        //         .addSystemProperty(propertyName)
        //         .getValue();
        // Here is the simplified case:
        assertEquals(1, System.getProperties().size());
        assertEquals(propertyValue, System.getProperty(propertyName));
    }
}
I could call System.setProperty(), but when you start getting into the other sources, it becomes less clear.
Note that I do not specifically care about the value returned by System.getProperty() either; I simply want to ensure that it is called if the first lookup fails.
For example, in the above code snippet, the environment variable does not exist, so System.getProperty() should be called. If the environment variable existed (as it does in the next test case, which is not shown), then I want to verify that System.getProperty() was not called, because the lookup should have short-circuited.
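With the PowerMock 1.x setup above, such call verification could look roughly like this (a sketch; verifyStatic followed by a call to the static method is the PowerMock 1.x idiom):
// In the test where the environment variable is missing:
// assert that System.getProperty was consulted for our key.
PowerMockito.verifyStatic(Mockito.times(1));
System.getProperty(propertyName);

// In the test where the environment variable exists:
// assert that the lookup short-circuited and never reached System.getProperty.
PowerMockito.verifyStatic(Mockito.never());
System.getProperty(propertyName);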
Because of the difficulties in faking out the other sources using real files, real ldap, real APIs, etc, and because I want to verify certain APIs are either called or not called, and because I want to keep the tests looking consistent, I think mocking is the correct methodology (even though I may be trying to mock stuff that is not recommended in order to keep it all looking consistent). Please let me know if you think otherwise.
Also, while I do not understand the difficulties of maintaining these mocking frameworks (especially the cglib based ones), I understand the difficulties do exist and I can appreciate the sort of problems you face. My hat goes off to you all.

How do I shutdown and reconfigure an AsyncHttpClient that is using NettyAsyncHttpProvider

I'm constructing an AsyncHttpClient like this:
public AsyncHttpClient getAsyncHttpClient() {
    AsyncHttpClientConfig config = new AsyncHttpClientConfig.Builder()
            .setProxyServer(makeProxyServer())
            .setRequestTimeoutInMs((int) Duration.create(ASYNC_HTTP_REQUEST_TIMEOUT_MIN, TimeUnit.MINUTES).toMillis())
            .build();
    return new AsyncHttpClient(new NettyAsyncHttpProvider(config), config);
}
This gets called once at startup, and then the return value is passed around and used in various places. makeProxyServer() is my own function that takes my proxy settings and returns a ProxyServer object. What I need is to be able to change the proxy server settings and then recreate the AsyncHttpClient object. But I don't know how to shut it down cleanly. A bit of searching leads me to believe that close() isn't graceful. I'm worried about spinning up a whole new executor and set of threads every time the proxy settings change. This won't happen often, but my application is very long-running.
I know I can use RequestBuilder.setProxyServer() for each request, but I'd like to have it set in one spot so that all callers of my asyncHttpClient instance obey the system-wide proxy settings without each developer having to remember to do it.
What's the right way to re-configure or teardown and rebuild a Netty-based AsyncHttpClient?
The problem with using AsyncHttpClient.close() is that it shuts down the thread pool executor used by the provider, and then there is no way to re-use the client without rebuilding it, because, as per the documentation, an executor instance cannot be reused once it is shut down. So there is no way around rebuilding the client if you go that route (unless you implement your own ExecutorService with different shutdown logic, but that is a long way to go, IMHO).
However, from looking into the implementation of NettyAsyncHttpProvider, I can see that it stores a reference to the given AsyncHttpClientConfig instance and calls its getProxyServerSelector() to get the proxy settings for every new NettyAsyncHttpProvider.execute(Request...) invocation (i.e. for every request executed by AsyncHttpClient).
So, if we could make getProxyServerSelector() return a configurable instance of ProxyServerSelector, that would do the trick.
Unfortunately, AsyncHttpClientConfig is designed as a read-only container, instantiated by AsyncHttpClientConfig.Builder.
To overcome this limitation, we have to hack it, using, say, a "wrap/delegate" approach:
Create a new class derived from AsyncHttpClientConfig. The class should wrap a given AsyncHttpClientConfig instance and delegate the AsyncHttpClientConfig getters to it.
To be able to return the proxy selector we want at any given point in time, we make this setting mutable in the wrapper class and expose a setter for it.
Example:
public class MyAsyncHttpClientConfig extends AsyncHttpClientConfig
{
    private final AsyncHttpClientConfig config;
    private ProxyServerSelector proxyServerSelector;

    public MyAsyncHttpClientConfig(AsyncHttpClientConfig config)
    {
        this.config = config;
    }

    @Override
    public int getMaxTotalConnections() { return config.getMaxTotalConnections(); }

    @Override
    public int getMaxConnectionPerHost() { return config.getMaxConnectionPerHost(); }

    // delegate the others, except getProxyServerSelector()
    ...

    @Override
    public ProxyServerSelector getProxyServerSelector()
    {
        return proxyServerSelector == null
                ? config.getProxyServerSelector()
                : proxyServerSelector;
    }

    public void setProxyServerSelector(ProxyServerSelector proxyServerSelector)
    {
        this.proxyServerSelector = proxyServerSelector;
    }
}
Now, in your example, wrap your AsyncHttpClient config instance with our new wrapper and use it to configure the AsyncHttpClient:
Example:
MyAsyncHttpClientConfig myConfig = new MyAsyncHttpClientConfig(config);
return new AsyncHttpClient(new NettyAsyncHttpProvider(myConfig), myConfig);
Whenever you invoke myConfig.setProxyServerSelector(newSelector), new requests executed by the NettyAsyncHttpProvider instance in your client will use the new proxy server settings.
A few hints/warnings:
This approach relies on the internal implementation of NettyAsyncHttpProvider; therefore, make your own judgement about maintainability, your upgrade strategy for future Netty library versions, etc. You could always look at the Netty source code before upgrading to a new version. At the current point, I personally think it is unlikely to change so much as to invalidate this implementation.
You can get a ProxyServerSelector for a ProxyServer by using com.ning.http.util.ProxyUtils.createProxyServerSelector(proxyServer) - that's exactly what AsyncHttpClientConfig.Builder does (see the sketch after these hints).
The given example has no synchronization logic for accessing proxyServerSelector; you may want to add some as your application logic needs.
Maybe it is a good idea to submit a feature request for AsyncHttpClient to be able to setup a "configuration factory" for the AsyncHttpProvider so all these complications would vanish :-)
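For instance, switching the proxy at runtime could then look like this (a sketch under the assumptions above; the host and port are made up):
// Given the myConfig wrapper from the example above:
ProxyServer newProxy = new ProxyServer("proxy.example.com", 8080); // hypothetical host/port
myConfig.setProxyServerSelector(ProxyUtils.createProxyServerSelector(newProxy));
// Requests executed from now on pick up the new selector via getProxyServerSelector().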
You should be holding a RequestHandle instance for all your unfinished requests. When you want to shut down, you can loop through and call isFinished() on all of them until they are all done. Then you know you can safely close it and no pending requests will be killed.
Once it's closed, just build a new one. Don't try to reuse the existing one. If you have references to it around, change those to reference a Factory that will return the current one.
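A sketch of that drain-then-rebuild sequence, assuming you collect a RequestHandle for every in-flight request as this answer suggests (the polling interval is arbitrary):
// Block until every outstanding request reports completion.
static void awaitAll(List<RequestHandle> pending) throws InterruptedException {
    for (RequestHandle handle : pending) {
        while (!handle.isFinished()) {
            Thread.sleep(100); // arbitrary polling interval
        }
    }
}

// Usage when the proxy settings change:
// awaitAll(pendingHandles);        // nothing is in flight after this returns
// client.close();                  // safe to close now
// client = getAsyncHttpClient();   // rebuild with the new settings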

Sensible unit test possible?

Could a sensible unit test be written for this code, which extracts a rar archive by delegating to a capable tool on the host system if one exists?
I can write a test case based on the fact that my machine runs Linux and has the unrar tool installed, but if another developer running Windows checked out the code, the test would fail, although there would be nothing wrong with the extractor code.
I need to find a way to write a meaningful test which is not bound to the system and the installed unrar tool.
How would you tackle this?
public class Extractor {

    private EventBus eventBus;
    private ExtractCommand[] linuxExtractCommands = new ExtractCommand[]{new LinuxUnrarCommand()};
    private ExtractCommand[] windowsExtractCommands = new ExtractCommand[]{};
    private ExtractCommand[] macExtractCommands = new ExtractCommand[]{};

    @Inject
    public Extractor(EventBus eventBus) {
        this.eventBus = eventBus;
    }

    public boolean extract(DownloadCandidate downloadCandidate) {
        for (ExtractCommand command : getSystemSpecificExtractCommands()) {
            if (command.extract(downloadCandidate)) {
                eventBus.fireEvent(this, new ExtractCompletedEvent());
                return true;
            }
        }
        eventBus.fireEvent(this, new ExtractFailedEvent());
        return false;
    }

    private ExtractCommand[] getSystemSpecificExtractCommands() {
        String os = System.getProperty("os.name");
        if (Pattern.compile("linux", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
            return linuxExtractCommands;
        } else if (Pattern.compile("windows", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
            return windowsExtractCommands;
        } else if (Pattern.compile("mac os x", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
            return macExtractCommands;
        }
        return null;
    }
}
Could you not pass the class a Map<String, ExtractCommand[]> instance and add an overridable method, say getOsName(), for obtaining the string to match? Then getSystemSpecificExtractCommands() could simply look up the match string in the map to get the extract commands. This would let you inject a map containing a mock ExtractCommand and override getOsName() to return the key of your mock command, so you could test that when the extraction works, the eventBus fires the completed event, and so on (see the test sketch after the code below).
private Map<String, ExtractCommand[]> commandMap;

@Inject
public Extractor(EventBus eventBus, Map<String, ExtractCommand[]> commandMap) {
    this.eventBus = eventBus;
    this.commandMap = commandMap;
}

private ExtractCommand[] getSystemSpecificExtractCommands() {
    String os = getOsName();
    return commandMap.get(os);
}

protected String getOsName() {
    return System.getProperty("os.name");
}
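A test sketch along those lines, using Mockito and the constructor above (the anonymous subclass stands in for overriding getOsName(); the "TestOS" key is made up):
@Test
public void firesCompletedEventWhenExtractionSucceeds() {
    EventBus eventBus = Mockito.mock(EventBus.class);
    DownloadCandidate candidate = Mockito.mock(DownloadCandidate.class);
    ExtractCommand command = Mockito.mock(ExtractCommand.class);
    Mockito.when(command.extract(candidate)).thenReturn(true);

    Map<String, ExtractCommand[]> commandMap = new HashMap<String, ExtractCommand[]>();
    commandMap.put("TestOS", new ExtractCommand[]{command});

    // Subclass overrides getOsName() so the lookup hits our mock entry.
    Extractor extractor = new Extractor(eventBus, commandMap) {
        @Override
        protected String getOsName() {
            return "TestOS";
        }
    };

    assertTrue(extractor.extract(candidate));
    Mockito.verify(eventBus).fireEvent(Mockito.same(extractor),
            Mockito.isA(ExtractCompletedEvent.class));
}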
I would look for a pure Java API for manipulating rar files. That way the code would not be system-dependent.
A quick search on Google returned this:
http://www.example-code.com/java/rar_unrar.asp
Start with a mock framework. You'll need to refactor a bit, as you will need to ensure that some of those private and local scope properties/variables can be overridden if need be.
Then when you are testing Extract, you make sure you've mocked out the commands, and ensure that the Extract method is called on your mocked objects. You'll also want to ensure that your event got fired too.
Now, to make it more testable, you can use constructor or property injection. Either way, you'll need to make the private ExtractCommand arrays overridable.
Sorry, I don't have time to recode and post it, but that should just about get you started nicely.
Good luck.
EDIT. It does sound like you are more after a functional test anyway if you want to test that it is actually extracted correctly.
Testing can be tricky, especially getting the divides right between the different types of tests and when they should be run and what their responsibilities are. This is even more so with cross-platform code.
While it's possible to think of this as one code base you are testing, it's really multiple code bases: the generic Java code and the code for each target platform, so you will need multiple tests.
To begin with unit testing, you will not be exercising the external command. Rather, each platform-specific class is tested to see that it generates the correct command line, without actually executing it.
Your Java class that hides all the platform specifics (which command to use) has a unit test verifying that it instantiates the correct platform-specific class for a given platform. The platform can be a parameter to the core test, so multiple platforms can be "emulated". To take the unit test further, you could mock out the command implementation (e.g. having a RAR file and its uncompressed form as part of your test data, where the command is a simple copy of the uncompressed data).
Once these unit tests are in place and green, you can then move on to functional tests, where the real platform-specific commands are executed. Of course, these functional tests have to be run on the actual platform. Each functional test corresponds to a platform-specific class that knows how to create the correct command line to unrar.
Your build is configured to exclude tests for classes that don't apply to the current platform, so that, for example, LinuxUnrarer is not tested on Windows. The platform-independent Java class is always tested, and it will instantiate the appropriate platform-specific test. This gives you an integration test to see that the system works end to end.
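A sketch of that platform-as-a-parameter unit test (JUnit 4; commandsForOs and the Windows/Mac command classes are hypothetical names for the refactored selection hook):
@RunWith(Parameterized.class)
public class PlatformSelectionTest {

    @Parameterized.Parameters(name = "{0}")
    public static Object[][] platforms() {
        return new Object[][]{
                {"Linux", LinuxUnrarCommand.class},
                {"Windows 7", WindowsUnrarCommand.class}, // hypothetical class
                {"Mac OS X", MacUnrarCommand.class},      // hypothetical class
        };
    }

    @Parameterized.Parameter(0)
    public String osName;

    @Parameterized.Parameter(1)
    public Class<?> expectedCommandClass;

    @Test
    public void selectsCommandMatchingThePlatform() {
        // commandsForOs(String) is the assumed refactoring of
        // getSystemSpecificExtractCommands() that takes the OS name explicitly.
        ExtractCommand[] commands = Extractor.commandsForOs(osName);
        assertEquals(expectedCommandClass, commands[0].getClass());
    }
}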
As to cross-platform UNRAR, there is a Java RAR scanner, but it doesn't decompress.
