I am trying to create a custom SFTP server using Apache Mina SSHD. My code so far:
SshServer sshd = SshServer.setUpDefaultServer();
sshd.setPort(PORT_NUMBER);
sshd.setKeyPairProvider(new SimpleGeneratorHostKeyProvider(Paths.get("keys/private_key.ppk")));
SftpSubsystemFactory factory = new SftpSubsystemFactory.Builder()
    .build();
factory.addSftpEventListener(new BasicSftpEventListener());
sshd.setSubsystemFactories(Collections.singletonList(factory));
sshd.setShellFactory(new ProcessShellFactory("/bin/sh", "-i", "-l"));
sshd.start();
As you can see, I implemented my own SftpEventListener:
public class BasicSftpEventListener implements SftpEventListener {

    @Override
    public void removing(ServerSession session, Path path) throws IOException {
        System.out.println("removing");
    }

    @Override
    public void removed(ServerSession session, Path path, Throwable thrown) throws IOException {
        System.out.println("removed");
    }
}
When I want to remove a file, it executes my removing and removed listeners, BUT the remove operation proceeds and the file is deleted.
Is there a way to stop this from happening?
Thanks for the help!
If you want to block delete actions, you will need to interrupt the flow of the removing method with an exception. This will tell Mina to stop and not remove the file. I would recommend using java.lang.UnsupportedOperationException for this:
@Override
public void removing(ServerSession session, Path path) throws IOException {
    throw new UnsupportedOperationException("Removing files is not permitted.");
}
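If you only want to block some deletions, the listener can inspect the path first and throw only when it matches a rule (a sketch; the protected directory is made up for illustration):

@Override
public void removing(ServerSession session, Path path) throws IOException {
    // hypothetical rule: only files under this directory are protected
    if (path.startsWith(Paths.get("/srv/sftp/protected"))) {
        throw new UnsupportedOperationException("Removing " + path + " is not permitted.");
    }
}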
I've read the CDI 2.0 specification (JSR 365) and found the @Observes(during = AFTER_SUCCESS) annotation, but it requires a custom event to be defined in order to work.
This is what I've got:
// simple """transactional""" file system manager using the command pattern
@Transactional(value = Transactional.TxType.REQUIRED)
@TransactionScoped
@Stateful
public class TransactionalFileSystemManager implements SessionSynchronization {

    private final Deque<Command> commands = new ArrayDeque<>();

    public void createFile(InputStream content, Path path, String name) throws IOException {
        CreateFile command = CreateFile.execute(content, path, name);
        commands.addLast(command);
    }

    public void deleteFile(Path path) throws IOException {
        DeleteFile command = DeleteFile.execute(path);
        commands.addLast(command);
    }

    private void commit() throws IOException {
        for (Command c : commands) {
            c.confirm();
        }
    }

    private void rollback() throws IOException {
        Iterator<Command> it = commands.descendingIterator();
        while (it.hasNext()) {
            Command c = it.next();
            c.undo();
        }
    }

    @Override
    public void afterBegin() throws EJBException {
    }

    @Override
    public void beforeCompletion() throws EJBException {
    }

    @Override
    public void afterCompletion(boolean commitSucceeded) throws EJBException {
        if (commitSucceeded) {
            try {
                commit();
            } catch (IOException e) {
                throw new EJBException(e);
            }
        } else {
            try {
                rollback();
            } catch (IOException e) {
                throw new EJBException(e);
            }
        }
    }
}
However, I want to adopt a CDI-only solution, so I need to remove anything EJB-related (including the SessionSynchronization interface). How can I achieve the same result using CDI?
First, the facts: the authoritative source for this topic is the Java Transaction API (JTA) specification; search for it online.
Then the bad news: in order to truly participate in a JTA transaction, you either have to implement a connector according to the Java Connector Architecture (JCA) specification or an XAResource according to JTA. I have never done either, and I am afraid both are going to be hard. Nevertheless, if you search, you may find an existing implementation of a File System Connector.
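Just to show the shape of what "implement an XAResource" means, here is a skeleton (the javax.transaction.xa interface is standard; everything inside the method bodies is hand-waving, not a working resource manager):

import java.util.ArrayDeque;
import java.util.Deque;
import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

public class FileSystemXAResource implements XAResource {

    private final Deque<Command> commands = new ArrayDeque<>();

    @Override
    public void start(Xid xid, int flags) throws XAException {
        // associate the command queue with this transaction branch
    }

    @Override
    public void end(Xid xid, int flags) throws XAException {
        // dissociate from the branch
    }

    @Override
    public int prepare(Xid xid) throws XAException {
        // verify all queued commands can still be confirmed;
        // this is the hard part: the vote must survive a crash
        return XA_OK;
    }

    @Override
    public void commit(Xid xid, boolean onePhase) throws XAException {
        // confirm the queued commands
    }

    @Override
    public void rollback(Xid xid) throws XAException {
        // undo the queued commands in reverse order
    }

    @Override
    public void forget(Xid xid) throws XAException {
    }

    @Override
    public Xid[] recover(int flag) throws XAException {
        // report in-doubt branches after a crash; requires a durable log
        return new Xid[0];
    }

    @Override
    public boolean isSameRM(XAResource other) throws XAException {
        return other == this;
    }

    @Override
    public int getTransactionTimeout() throws XAException {
        return 0;
    }

    @Override
    public boolean setTransactionTimeout(int seconds) throws XAException {
        return false;
    }
}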
Your code above will never accomplish true two-phase commit: if your code fails, the transaction is already committed, so the application state is inconsistent. And even when nothing fails, there is a small time window in which the real transaction is committed but the file system changes have not yet been executed; again, the state is inconsistent.
Some workarounds I can think of, none of which solves the consistency problem:
- Persist the file system commands in a database. This ensures they are enqueued transactionally. A scheduled job then wakes up and actually tries to execute the queued FS commands.
- Register a Synchronization with the current Transaction and fire an appropriate event from there. Your TransactionalFileSystemManager observes this event; no during attribute needed, I guess. A sketch of this follows below.
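A rough sketch of that second workaround (TransactionSynchronizationRegistry, Synchronization, and Status are standard JTA APIs; the event class and its wiring are made up for illustration):

import javax.annotation.Resource;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Event;
import javax.inject.Inject;
import javax.transaction.Status;
import javax.transaction.Synchronization;
import javax.transaction.TransactionSynchronizationRegistry;

@ApplicationScoped
public class TransactionEventBridge {

    @Resource
    private TransactionSynchronizationRegistry registry;

    @Inject
    private Event<TransactionFinished> finishedEvent;

    // call this once at the start of each transaction,
    // e.g. from TransactionalFileSystemManager
    public void registerSynchronization() {
        registry.registerInterposedSynchronization(new Synchronization() {
            @Override
            public void beforeCompletion() {
                // nothing to do before completion
            }

            @Override
            public void afterCompletion(int status) {
                // translate the JTA outcome into a CDI event
                finishedEvent.fire(new TransactionFinished(
                        status == Status.STATUS_COMMITTED));
            }
        });
    }

    // simple event payload carrying the transaction outcome
    public static class TransactionFinished {
        public final boolean committed;

        public TransactionFinished(boolean committed) {
            this.committed = committed;
        }
    }
}

TransactionalFileSystemManager could then observe the event instead of implementing SessionSynchronization:

public void onTransactionFinished(@Observes TransactionEventBridge.TransactionFinished event) {
    try {
        if (event.committed) {
            commit();
        } else {
            rollback();
        }
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}

Note that a @TransactionScoped bean is destroyed when the transaction ends, so the observer and the command queue may be better placed in an @ApplicationScoped bean keyed by transaction.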
I would like to stream out logs via an API endpoint (/logs) in Dropwizard.
It is harder than I thought. Dropwizard reads in the configuration via config.yml and keeps that information private, and I have no idea where I would be able to find the logger that logs everything.
Am I missing something?
Is there another way to do this?
This is a streaming example; you can also read up on it here:
calling flush() on Jersey StreamingOutput has no effect
And
Example of using StreamingOutput as Response entity in Jersey
The code:
public class StreamingTest extends io.dropwizard.Application<Configuration> {

    @Override
    public void run(Configuration configuration, Environment environment) throws Exception {
        environment.jersey().property(ServerProperties.OUTBOUND_CONTENT_LENGTH_BUFFER, 0);
        environment.jersey().register(Streamer.class);
    }

    public static void main(String[] args) throws Exception {
        new StreamingTest().run("server", "/home/artur/dev/repo/sandbox/src/main/resources/config/test.yaml");
    }

    @Path("/log")
    public static class Streamer {

        @GET
        @Produces("application/octet-stream")
        public Response test() {
            return Response.ok(new StreamingOutput() {
                @Override
                public void write(OutputStream output) throws IOException, WebApplicationException {
                    while (true) {
                        output.write("Test \n".getBytes());
                        output.flush();
                        try {
                            Thread.sleep(1000); // simulate waiting for stream
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                    }
                }
            }).build();
        }
    }
}
Which does what you want.
A few points: the line environment.jersey().property(ServerProperties.OUTBOUND_CONTENT_LENGTH_BUFFER, 0); is important. It tells the server not to buffer the output before returning it to the caller.
Additionally, you need to explicitly flush the output by calling output.flush().
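As for finding the logger that logs everything: Dropwizard uses Logback, so one option (a sketch, untested) is to attach a Logback OutputStreamAppender to the root logger from inside write(), so whatever is logged also gets copied into the HTTP response (LoggerContext, PatternLayoutEncoder, OutputStreamAppender, and ILoggingEvent come from the ch.qos.logback packages):

// inside write(OutputStream output), before the loop
LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

PatternLayoutEncoder encoder = new PatternLayoutEncoder();
encoder.setContext(context);
encoder.setPattern("%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n");
encoder.start();

OutputStreamAppender<ILoggingEvent> appender = new OutputStreamAppender<>();
appender.setContext(context);
appender.setEncoder(encoder);
appender.setOutputStream(output); // the response stream from StreamingOutput
appender.start();

context.getLogger(Logger.ROOT_LOGGER_NAME).addAppender(appender);
// remember to detach and stop the appender when the client disconnects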
However, this seems to be the wrong way to ship logs. Have you ever looked into something like Logstash, or network-based appenders that stream the logs directly to where you need them to be?
Hope that helps,
--artur
I'm trying to set up a Camel route for transferring files over HTTP. I'm also trying to understand the concept as I'm new to this.
When I code something like the below, does that mean I'm routing a simple message over HTTP? Could I call Jetty the consumer in this case? I'm able to run the code below, hit the URL in a browser, and see the message successfully.
from("jetty://http://localhost:32112/greeting")
.setBody(simple("Hello, world!"));
However, I want to send a simple message (eventually an XML document) over HTTP, after which I would want to save it to disk and analyse it further. Should code like the below work?
CamelContext context = new DefaultCamelContext();
ProducerTemplate template = context.createProducerTemplate();
template.sendBody("direct:start", "This is a test message");
from("direct:start")
.to("jetty://localhost:32112/greeting");
from("jetty://http://localhost:32112/greeting")
.to("direct:end");
Should I not be using direct:start here for parsing XMLs?
Thanks a lot for the help.
First you have to create your routes and start your context. Then you can send messages via your template.
The route could look like this
from("jetty:http://0.0.0.0:32112/greeting")
.routeId("xml-converter-route").autoStartup(false)
.bean(xmlConverterBean, "convertXmlMethodToBeCalledInBean()")
;
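Because the route is created with autoStartup(false), it has to be started explicitly once the context is up. A sketch of the wiring using the Camel 2.x API (xmlConverterBean is the bean assumed by the route above):

CamelContext context = new DefaultCamelContext();
context.addRoutes(new RouteBuilder() {
    @Override
    public void configure() {
        from("jetty:http://0.0.0.0:32112/greeting")
            .routeId("xml-converter-route").autoStartup(false)
            .bean(xmlConverterBean, "convertXmlMethodToBeCalledInBean()");
    }
});
context.start();                           // starts the context, not the route
context.startRoute("xml-converter-route"); // starts the route explicitly

// only now create the template and send messages
ProducerTemplate template = context.createProducerTemplate();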
If you just want to transfer data and nothing else, use restlet or netty4-http; they are more lightweight than jetty.
from("restlet:/http://localhost:32112/greeting").convertBodyTo(String.class).log(LoggingLevel.INFO, "filetransfer", "log of body: ${body} and headers ${headers}").to("file://C:/test?fileName=body.txt");
Here's a camel test which may help you understand how these components work.
public class CamelRESTExampleTest extends CamelTestSupport {

    private static final Logger LOG = LoggerFactory.getLogger(CamelRESTExampleTest.class);

    @Override
    protected RouteBuilder createRouteBuilder() {
        return new RouteBuilder() {
            public void configure() {
                // Create a service listening on port 8080
                from("restlet:http://localhost:8080/xmlFileService?restletMethod=post")
                    .process(new Processor() {
                        public void process(Exchange exchange) throws Exception {
                            String rawXML = exchange.getIn().getBody(String.class);
                            LOG.info("rawXML=" + rawXML);
                        }
                    });

                // Read files from the local directory and send them to the service.
                // Create a test.xml file in this directory and it will be read in.
                from("file:src/test/resources/data?noop=true")
                    .to("restlet:http://localhost:8080/xmlFileService?restletMethod=post");
            }
        };
    }

    @Test
    public void test() throws InterruptedException {
        // Give the route time to complete
        TimeUnit.SECONDS.sleep(5);
    }
}
Here is what I would like to achieve:
On one hand, I have an Oracle database. On the other hand, a "simple" Java application (let's call it "App").
And in the middle, an embedded ApacheDS in Java. The idea is to access that database through the embedded LDAP server.
At the moment, I'm able to connect "App" to the embedded LDAP server, send parameters to it, and execute some SQL in the Oracle database.
But the problem is that I can't get the result back to "App".
Apparently, I should use my own "SearchHandler", but I can't figure out how to do it.
I hope my explanations are clear enough. If not, I can try to give more details.
server.setSearchHandler(new LdapRequestHandler<InternalSearchRequest>() {
    @Override
    public void handle(LdapSession ls, InternalSearchRequest t) throws Exception {
        // Getting data from the Oracle database
        System.out.println(dataFromDatabase);
    }
});
A little bit late, but you are on the right path.
I am doing more or less the same (but with a BindRequestHandler), using ApacheDS as an LDAP proxy. I'm doing it with version 2.0.0-M23.
I extended it like this:
public class LoggerBindRequestHandler extends BindRequestHandler {

    private static final Logger LOGGER = LoggerFactory.getLogger(LoggerBindRequestHandler.class);

    @Override
    public void handle(LdapSession session, BindRequest request) throws Exception {
        LOGGER.debug(session.toString());
        LOGGER.debug(request.toString());
        super.handle(session, request);
    }

    @Override
    public void handleSaslAuth(LdapSession session, BindRequest request) throws Exception {
        LOGGER.debug(session.toString());
        LOGGER.debug(request.toString());
        super.handleSaslAuth(session, request);
    }

    @Override
    public void handleSimpleAuth(LdapSession session, BindRequest request) throws Exception {
        LOGGER.debug(session.toString());
        LOGGER.debug(request.toString());
        super.handleSimpleAuth(session, request);
    }
}
Then I set the handlers on the LDAP server:
ldapServer.setBindHandlers(new LoggerBindRequestHandler(), new LoggerBindResponseHandler());
And that's it.
As far as I can tell, you are missing the response handler. This is why you are able to send commands to Oracle but never send a response back to "App".
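The LoggerBindResponseHandler referenced above could look like this (a sketch, assuming ApacheDS 2.0.0-M23 exposes an LdapResponseHandler base class symmetric to the request handler; the search side works analogously):

public class LoggerBindResponseHandler extends LdapResponseHandler<BindResponse> {

    private static final Logger LOGGER = LoggerFactory.getLogger(LoggerBindResponseHandler.class);

    @Override
    public void handle(LdapSession session, BindResponse response) throws Exception {
        LOGGER.debug(session.toString());
        LOGGER.debug(response.toString());
        // populate the response (e.g. with the data fetched from Oracle)
        // before it is written back to the client
    }
}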
I am developing a Swing application that needs to communicate with a remote HTTP server. The application may be used behind proxies.
I must then:
- automatically detect network proxies (there may be several on the same network)
- let the user manually enter a proxy configuration.
I want to write an integration test to validate those aspects, without having to install proxies on every CI machine and every developer machine.
How I see things:
- the integration test (with JUnit) starts an "embedded" proxy (@BeforeClass) and a dummy HTTP server
- my tests (@Test):
  - check that the proxy can be detected automatically, open a connection to my dummy HTTP server, and successfully retrieve data from it
  - manually set the proxy and perform the same test as above
I have heard about the "LittleProxy" component but haven't tried it yet.
Can anyone offer advice or guidance regarding the best approach to solving my problem?
I would consider whether you are testing the right thing. You don't need to test proxy servers or Java's network classes.
Consider this utility type for reading data from a URL:
public final class Network {

    public interface UrlOpener {
        public InputStream open(URL url) throws IOException;
    }

    private static UrlOpener urlOpener = new UrlOpener() {
        public InputStream open(URL url) throws IOException {
            return url.openStream();
        }
    };

    public static InputStream openUrl(URL url) throws IOException {
        return urlOpener.open(url);
    }

    public static void setUrlOpener(UrlOpener urlOpener) {
        Network.urlOpener = urlOpener;
    }
}
This can be used as an abstraction layer between your code and Java's network I/O:
public class SomeType {
    public void processData(URL url) throws IOException {
        InputStream input = Network.openUrl(url);
        // process the data
    }
}
Your tests use this layer to mock out the data:
@Before
public void setup() throws IOException {
    final URL mockUrl = this.getClass().getResource("/foo/bar.txt");
    Network.UrlOpener opener = Mockito.mock(Network.UrlOpener.class);
    Answer<InputStream> substituteUrl = new Answer<InputStream>() {
        public InputStream answer(InvocationOnMock invocation) throws Throwable {
            return mockUrl.openStream();
        }
    };
    Mockito.when(opener.open(Mockito.any(URL.class))).then(substituteUrl);
    Network.setUrlOpener(opener);
}
@Test
public void testSomething() throws IOException {
    SomeType something = new SomeType();
    something.processData(new URL("http://example.com"));
}
This saves any mucking around with firewalls etc.
In order for this approach to work, you would want to have confidence in a set of recorded transactions from real servers to use in your tests.
This approach can be complemented with a dedicated machine running more comprehensive integration tests.