This afternoon I wrote this class, whose aim is to provide an easy way to send a file over a TCP socket.
The problem is that, although the final file size is correct, the content is wrong: the destination file is made of multiple copies of the first buffer sent over the socket.
My class is simple: it calculates a quotient Q and remainder R from the file length and the buffer size, and sends these numbers, together with the original filename, to the client. I used a byte array to send the data over the socket.
package it.s4sytems.java;
import java.io.*;
import java.net.*;
public class FileOverObjectStream
{
private File file;
private int bufferSize = 4*1024*1024; // 4 MB default; in any case it is set by the sender
private static class Info implements Serializable
{
public String fileName;
public long q;
public int r;
public int bufferSize;
}
public FileOverObjectStream(File file)
{
this.file = file;
}
public FileOverObjectStream(File file, int bufferSize)
{
this(file);
this.bufferSize = bufferSize;
}
public void sendFile(Socket socket) throws IOException
{
socket.getInputStream();
sendFile( socket.getOutputStream() );
}
public void sendFile(OutputStream outStream)throws IOException
{
sendFile( new ObjectOutputStream(outStream) );
}
public void sendFile(ObjectOutputStream objOutStream) throws IOException
{
BufferedInputStream in = new BufferedInputStream( new FileInputStream(file) );
byte[] buffer = new byte[bufferSize];
Info info = new Info();
info.fileName = file.getName();
info.bufferSize = bufferSize;
info.q = file.length() / bufferSize;
info.r = (int) file.length() % bufferSize;
objOutStream.writeObject(info);
for(long i=0; i<info.q; i++)
{
in.read(buffer);
objOutStream.writeObject(buffer);
objOutStream.flush();
}
in.read( buffer = new byte[info.r]);
objOutStream.writeObject(buffer);
objOutStream.flush();
in.close();
}
public String receiveFile(Socket socket) throws IOException, ClassNotFoundException
{
socket.getOutputStream();
return receiveFile( socket.getInputStream() );
}
public String receiveFile(InputStream inStream) throws IOException, ClassNotFoundException
{
return receiveFile( new ObjectInputStream(inStream) );
}
public String receiveFile(ObjectInputStream objInStream) throws IOException, ClassNotFoundException
{
BufferedOutputStream out = new BufferedOutputStream( new FileOutputStream(file) );
Info info = (Info) objInStream.readObject();
for(long i=0; i<info.q+1; i++)
{
byte[] buffer = (byte[]) objInStream.readObject();
out.write( buffer );
}
out.close();
return info.fileName;
}
}
I created two classes to do some testing...
import it.s4sytems.java.*;
import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;
public class Server
{
public static void main(String arg[]) throws IOException
{
ServerSocket ss = new ServerSocket(18000);
while(true)
{
Socket s = ss.accept();
File file = new File("G:\\HCHCK_72_5.38.part04.rar");
FileOverObjectStream sender = new FileOverObjectStream(file);
sender.sendFile(s);
s.close();
}
}
}
and client...
import it.s4sytems.java.*;
import java.io.*;
import java.net.*;
public class Client
{
public static void main(String arg[]) throws IOException, ClassNotFoundException
{
Socket s = new Socket("localhost", 18000);
String matricola = "616002424";
File directory = new File(System.getProperty("user.dir") + "\\" + matricola);
directory.mkdir();
File file = File.createTempFile("7897_", null, directory);
String originalName = new FileOverObjectStream(file).receiveFile(s);
System.out.println(originalName);
s.close();
File file2 = new File(directory, originalName);
System.out.println( file.renameTo( file2 ) );
System.out.println( file.getAbsoluteFile());
System.out.println( file2.getAbsoluteFile());
}
}
Probably it's a stupid thing, but I can't see it, so I need your help, please.
Thank you
I don't think ObjectOutputStream is suitable for your use case, unless I missed something. In general, try to use a good I/O library such as Apache Commons IO; it has methods that always do the right thing. Look at IOUtils, for example.
Some errors to highlight (they would not happen with a good library):
in.read(buffer) is not guaranteed to read the exact number of bytes requested. You must check its result and write only the number of bytes actually read.
You write the buffer object to the ObjectOutputStream with writeObject. That writes a serialized byte array, not a raw sequence of bytes.
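For what it's worth, this is also the likely cause of the duplicated-first-buffer symptom: ObjectOutputStream caches object handles, so writing the same byte[] instance a second time sends only a back-reference to the first serialized copy, and the receiver keeps getting the first buffer's contents. A minimal sketch of a workaround for the write step, if you do keep ObjectOutputStream (assumes an import of java.util.Arrays):
int count = in.read(buffer);                             // may read fewer bytes than buffer.length (check for -1 at EOF)
objOutStream.writeObject(Arrays.copyOf(buffer, count));  // fresh array each time: no handle reuse, correct length
objOutStream.flush();                                    // alternatively, call objOutStream.reset() before each writeObject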
Your ObjectInput/OutputStream code is flawed in all the ways Alex noted. I wouldn't use it at all; I would just use raw I/O. The canonical way to copy a stream in Java is as follows:
int count;
byte[] buffer = new byte[8192]; // or more, but megabytes is pointless as the network will packetize anyway
while ((count = in.read(buffer)) > 0)
{
out.write(buffer, 0, count);
}
Use that same code when both sending and receiving the file. If you want to send > 1 file per connection, you need to prefix all that by sending the file name and length, which you can do with DataOutputStream.writeUTF()/writeLong(), and DataInputStream.readUTF()/readLong() at the receiver, and modify the loop control to read exactly that many bytes:
long remaining = size; // the file size read from the network
while ((count = in.read(buffer, 0, remaining > buffer.length ? buffer.length : (int)remaining)) > 0)
{
out.write(buffer, 0, count);
remaining -= count;
}
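A minimal sketch of the corresponding sending side (assuming a Socket named socket and a File named file; the receiver reads the name and length with readUTF()/readLong() and then runs the loop above):
DataOutputStream dataOut = new DataOutputStream(socket.getOutputStream());
dataOut.writeUTF(file.getName());   // file name prefix
dataOut.writeLong(file.length());   // file length prefix
try (InputStream fileIn = new BufferedInputStream(new FileInputStream(file)))
{
    int count;
    byte[] buffer = new byte[8192];
    while ((count = fileIn.read(buffer)) > 0)
    {
        dataOut.write(buffer, 0, count);
    }
}
dataOut.flush();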
I was trying to decode the JWT payload in Java, but the payload is compressed/deflated:
"zip": "DEF"
java.util.zip.DataFormatException: incorrect header check
private static byte[] decompress(byte[] value) throws DataFormatException {
ByteArrayOutputStream bos = new ByteArrayOutputStream(value.length);
Inflater decompressor = new Inflater();
try {
decompressor.setInput(value);
final byte[] buf = new byte[1024];
while (!decompressor.finished()) {
int count = decompressor.inflate(buf);
bos.write(buf, 0, count);
}
} finally {
decompressor.end();
}
return bos.toByteArray();
}
public static void main(String[] args) throws Exception {
String payload = "7VPbjtMwEP2X4TUXO9CumjdYkFghoZVaFiHUB9eZNka-RLYTUVb5d8ZuKxW09AuQ8jL2mTPnHGeeYZLQPkM8Dgjtd-hjHEJb18EIH3sUOvaVFL4Lr6SbVMdXUNzAnIoyFTdxypjRql8iKmdhW4D02KGNSuj1uPuBMiZJ-175J_QhYVp4U7GKE2k6fTfaTmPCeAxu9BI3WT6cL4qzHZBOa2JLDAXQAH8kj8Q8av3FawJc-ltGgEvxAvEjSaV-Allh8EQijNLEB-vN280HujmoCW3K8OvHh_Wnb7CdydlOkfX3IiYSvlqxkr2mD-a5eFEGvy3j4Tq3AkIUcQzZpxk0RkypT0JKZfHedZlBuk7ZQ1YcjiGiIXh6GHqXXt9Vzh_qFGkdVFfL6ScRyNwJDbuDeTsXMJy9Zzl79GiTtuvoEgj93nmDPk8SMjqfGjoVBi1SSvdP68deeCPkkdxTMk7K0WeyFM9GmdPQhpdsWTZLEqJd_DyaXeIE_s_Imv-RnSJb_BUZS5ltZ8oNlCAtfNks2HLBOKe_eLf_80CFcHaZN1ZFXopBVXIKl8V15nqR64nXec3n3w";
byte[] byt = Base64.getUrlDecoder().decode(new String(payload).getBytes("UTF-8"));
byte[] b = decompress(byt);
String s = new String(b, StandardCharsets.UTF_8);
}
Some folks in another programming language were able to crack this using the following; I'm wondering how I can accomplish this in Java.
const decompressedCard = zlib.inflateRawSync(decodedPayload);
const card = JSON.parse(decompressedCard.toString());
Usually a compressed payload is used in encrypted JWTs (JWE), but SMART Health Cards also use it in signed tokens (JWS). In both cases, the DEFLATE format as defined in RFC 1951 is used. For zlib (as shown in the example at the bottom of the question) you have to use deflateRaw/inflateRaw (DEFLATE without any zlib or gzip headers).
In case of the java.util.zip.Inflater, initializing the inflater with
Inflater decompressor = new Inflater(true);
sets the nowrap parameter to true, so the data is decompressed in raw mode (without a header), which is equivalent to using inflateRaw in Node.js.
(see also https://docs.oracle.com/javase/7/docs/api/java/util/zip/Inflater.html)
With this setting, the code in the question works fine and the given example data can be inflated to a JSON.
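For clarity, this is the decompress method from the question with only that one change applied (imports as in the question):
private static byte[] decompress(byte[] value) throws DataFormatException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream(value.length);
    Inflater decompressor = new Inflater(true); // nowrap = true: raw DEFLATE, no zlib header
    try {
        decompressor.setInput(value);
        final byte[] buf = new byte[1024];
        while (!decompressor.finished()) {
            int count = decompressor.inflate(buf);
            bos.write(buf, 0, count);
        }
    } finally {
        decompressor.end();
    }
    return bos.toByteArray();
}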
The point about nowrap is correct, I think, but nonetheless I wasn't able to get your code working until I fixed the corrupt input (mentioned above) and did this:
import java.util.Base64;
import java.util.zip.GZIPInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
public class Decomp2 {
public static byte[] gunzip(byte[] value) throws IOException {
byte[] result = null;
ByteArrayOutputStream out = new ByteArrayOutputStream();
byte[] buf = new byte[1024];
int numRead = -1;
try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(value))) {
while ((numRead = in.read(buf)) > -1) {
out.write(buf, 0, numRead);
}
result = out.toByteArray();
}
return result;
}
public static void main(String[] args) throws Exception {
// Data gzipped and b64url-encoded
String payload = "H4sIAKow-GAAA-1Ty27bMBC89zO2Vz1ItXZg3dokQIICRQC7CYrCB5paWwxIUSApoW6gf--StgG3SPwFAXRZcnZ2Zqh9gVFC_QJh3yPUv6ANofd1WXojXGhR6NAWUrjGf5R2VA1fQHYBcyjyWFzEKWOGTv0RQdkO1hlIhw12QQm9HDbPKEOUtG2Ve0TnI6aGzwUrOJHG069D12iMGIfeDk7iKsmH40V2tAPSak1skSEDGuD25JGYB61_OE2AU3_NCHAqXiF-IKnUT6BOGDyQCKM08cFy9WV1Szc7NWIXM3y6u19--wnriZxtFFm_ESGS8MWC5ewTfTBN2asy-GUZ9-e5ZeCDCINPPk2vMWBMfRRSqg6vbZMYpG1Ut0uK_d4HNASPD0Pv0uqrwrpdGSMtvWpKOf4mApk6oWJXMK2nDPqj9yRniw67qO08ughCt7XOoEuThAzWxYZG-V6LmNL14_KhFc4IuSf3lIyVcnCJLMazUuYwtOI5m-fVnIRoG74PZhM5gb8ZWfUe2SGy2X-RsZjZeqLcQAnSwufVjM1njHP6izfbfw-U90eXaWNV4LnoVSFHf1pca84XuRx5mdZ8-vAX5R6TWUMEAAA=";
byte[] byt = Base64.getUrlDecoder().decode(payload.getBytes("UTF-8"));
byte[] b = gunzip(byt);
String s = new String(b, StandardCharsets.UTF_8);
System.out.println(s);
}
}
I'm writing a simple TCP server in Java to receive image data from a client and then process it. The client is connected to the server over a 25 Gbit network, but the data transfer speed is limited to around 4.5 Gbit/s.
The client (Windows Server 2016) records data from an sCMOS camera at 100 fps (8 MB per frame) and writes the data directly to the TCP socket. The server (CentOS 7) then reads the data, image by image, and writes it out. The downstream disk-writing throughput is not the issue, since I see the same kind of performance without writing the data out at all. Using iperf between the Windows client and the Linux server gives the expected bandwidth (20+ Gbit/s). Is it normal to have such a big speed difference between iperf and real TCP traffic?
private void writefile(int zsize,int ysize,int xsize,String filename,int port) throws IOException{
InetAddress add = InetAddress.getByName(config.ipadd);
ServerSocket socket = new ServerSocket(port, 10, add);
Socket clientsocket;
clientsocket = socket.accept();
FileOutputStream fos = new FileOutputStream(filename);
DataInputStream in = new DataInputStream(new BufferedInputStream(clientsocket.getInputStream()));
int chunksize = 2*xsize*ysize;
byte[] frame = new byte[chunksize];
long t0 = System.currentTimeMillis();
for (int i=0;i<zsize;i++){
int pos = 0;
while (pos<chunksize-1){
int len = in.read(frame,pos,chunksize-pos);
pos+= len;
}
fos.write(frame);
}
fos.close();
long t1 = System.currentTimeMillis();
printlock.lock();
System.out.println((long)zsize*(long)xsize*(long)ysize*2d/(double)(t1-t0));
System.out.printf("Data transfer speed: %f MB/s\n", (long)zsize*(long)xsize*(long)ysize*2d/((double)(t1-t0)/1000d)/1024d/1024d);
printlock.unlock();
in.close();
socket.close();
clientsocket.close();
try{
ports.put(port);
}
catch (InterruptedException it){
it.printStackTrace();
}
return;
}
I have also tried the NIO approach (following the example from https://pzemtsov.github.io/2015/01/19/on-the-benefits-of-stream-buffering-in-Java.html). However, the performance is rather similar.
import java.io.FileOutputStream;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ByteChannel;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
public class datagetter {
private static ByteBuffer buf;
private String hostname = null;
private int port = 0;
private int framesize = 2048*2048;
private int numframe = 500;
private static void ensure (int len, ByteChannel chan) throws IOException
{
if (buf.remaining () < len) {
buf.compact ();
buf.flip ();
do {
buf.position (buf.limit ());
buf.limit (buf.capacity ());
chan.read (buf);
buf.flip ();
} while (buf.remaining () < len && buf.limit()!=buf.capacity());
}
}
public datagetter(String hostname, int port,int framesize,int numframe){
this.hostname = hostname;
this.port = port;
this.framesize = framesize;
this.numframe = numframe;
}
public void receiveandwrite(String filename) throws IOException{
buf = ByteBuffer.allocateDirect(framesize);
FileOutputStream fos = new FileOutputStream(filename);
ServerSocketChannel chanserv = ServerSocketChannel.open();
chanserv.socket().bind(new InetSocketAddress(hostname,this.port));
SocketChannel chan = chanserv.accept();
buf.limit(0);
byte[] msg = new byte[framesize];
for (int i=0;i<numframe;i++){
ensure(framesize,chan);
buf.get(msg,0,framesize);
fos.write(msg);
}
chanserv.close();
fos.close();
}
}
I understand that there would be network protocol overhead when comparing iperf to real data transfer, but I'm not sure why the throughput discrepancy is so large. Is it normal to expect such a big network speed difference?
I'm getting a FileNotFound error. Basically, I'm trying to upload a file from the client to the server.
Please help me with it.
This is my Client.java class:
package ftppackage;
import java.net.*;
import java.io.*;
public class Client {
public static void main (String [] args ) throws IOException {
Socket socket = new Socket("127.0.0.1",15123);
File transferFile = new File ("D:\\AsiaAd.wmv");
byte [] bytearray = new byte [(int)transferFile.length()];
FileInputStream fin = new FileInputStream(transferFile);
BufferedInputStream bin = new BufferedInputStream(fin);
bin.read(bytearray,0,bytearray.length);
OutputStream os = socket.getOutputStream();
System.out.println("Sending Files...");
os.write(bytearray,0,bytearray.length);
os.flush();
socket.close();
System.out.println("File transfer complete");
}
}
And this is my Server.java class:
package ftppackage;
import java.net.*;
import java.io.*;
public class Server {
public static void main (String [] args ) throws IOException {
int filesize=1022386;
int bytesRead;
int currentTot = 0;
ServerSocket serverSocket = new ServerSocket(15123);
Socket socket = serverSocket.accept();
System.out.println("Accepted connection : " + socket);
byte [] bytearray = new byte [filesize];
InputStream is = socket.getInputStream();
FileOutputStream fos = new FileOutputStream("E:\\0\\"); // it is creating new file not copying the one from client
BufferedOutputStream bos = new BufferedOutputStream(fos);
bytesRead = is.read(bytearray,0,bytearray.length);
currentTot = bytesRead;
do {
bytesRead = is.read(bytearray, currentTot, (bytearray.length-currentTot));
if(bytesRead >= 0)
currentTot += bytesRead;
} while(bytesRead > -1);
bos.write(bytearray, 0 , currentTot);
bos.flush();
bos.close();
socket.close();
}
}
Also, please guide me on how to add a progress bar with a percentage. I read about SwingWorker here but was unable to implement it, as I'm totally new to threading concepts.
Thank you for considering my questions.
FileNotFoundException is what you get when you point a File object (or a file stream) at a path that does not exist or cannot be opened as a file. It means the file you are trying to open is not there at the specified path, so make sure you give a valid path.
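Note that the server side can throw the same exception: new FileOutputStream("E:\\0\\") points at a directory rather than a file, and opening a directory for writing makes the constructor throw FileNotFoundException. A minimal sketch of a valid target path (the file name here is only illustrative):
FileOutputStream fos = new FileOutputStream("E:\\0\\AsiaAd_received.wmv"); // full file path, not a directory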
I know Mule has great support for gzip compression of data using the <gzip-compress-transformer> element. However, the client now wants zip compression, since the file has to be placed on an FTP server as a zip-compressed file :(
I'm encountering difficulties in Mule with the following scenario:
I created a Spring bean into which a file comes in. I want to compress this file using the ZipOutputStream class and pass it on to our FTP server.
This is my flow configuration:
<flow name="testFlow" initialState="stopped">
<file:inbound-endpoint path="${home.dir}/out" moveToDirectory="${hip.dir}/out/hist" fileAge="10000" responseTimeout="10000" connector-ref="input"/>
<component>
<spring-object bean="zipCompressor"/>
</component>
<set-variable value="#[message.inboundProperties.originalFilename]" variableName="originalFilename" />
<ftp:outbound-endpoint host="${ftp.host}" port="${ftp.port}" user="${ftp.username}" password="${ftp.password}" path="${ftp.root.out}" outputPattern="#[flowVars['originalFilename']].zip" />
</flow>
This is the code of my zipCompressor:
@Component
public class ZipCompressor implements Callable {
private static final Logger LOG = LogManager.getLogger(ZipCompressor.class.getName());
@Override
@Transactional
public Object onCall(MuleEventContext eventContext) throws Exception {
if (eventContext.getMessage().getPayload() instanceof File) {
final File srcFile = (File) eventContext.getMessage().getPayload();
final String fileName = srcFile.getName();
final File zipFile = new File(fileName + ".zip");
try {
// create byte buffer
byte[] buffer = new byte[1024];
FileOutputStream fos = new FileOutputStream(zipFile);
ZipOutputStream zos = new ZipOutputStream(fos);
FileInputStream fis = new FileInputStream(srcFile);
// begin writing a new ZIP entry, positions the stream to the start of the entry data
zos.putNextEntry(new ZipEntry(srcFile.getName()));
int length;
while ((length = fis.read(buffer)) > 0) {
zos.write(buffer, 0, length);
}
zos.closeEntry();
// close the InputStream
fis.close();
// close the ZipOutputStream
zos.close();
}
catch (IOException ioe) {
LOG.error("Error creating zip file" + ioe);
}
eventContext.getMessage().setPayload(zipFile);
}
return eventContext.getMessage();
}
}
I wrote a unit test and the compression works great. A file is indeed transferred to the FTP server with the correct name, but the zip file is invalid; opening it in Notepad++ shows that it contains just the original file name.
I think I'm doing something wrong with passing the zip file back to the Mule flow, but I'm stuck at the moment, so any help would be greatly appreciated!
I have implemented a transformer for this:
package com.test.transformer;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;
import org.apache.commons.io.IOUtils;
import org.apache.commons.io.output.ByteArrayOutputStream;
import org.mule.api.MuleMessage;
import org.mule.api.transformer.TransformerException;
import org.mule.transformer.AbstractMessageTransformer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class ZipTransformer
extends AbstractMessageTransformer
{
private static final Logger log = LoggerFactory.getLogger(ZipTransformer.class);
public static final int DEFAULT_BUFFER_SIZE = 32768;
public static byte[] MAGIC = { 'P', 'K', 0x3, 0x4 };
public ZipTransformer()
{
registerSourceType(InputStream.class);
registerSourceType(byte[].class);
}
public Object transformMessage(MuleMessage message, String outputEncoding)
throws TransformerException
{
Object payload = message.getPayload();
try{
byte[] data;
if (payload instanceof byte[])
{
data = (byte[]) payload;
}
else if (payload instanceof InputStream) {
data = IOUtils.toByteArray((InputStream)payload);
}
else if (payload instanceof String)
{
data = ((String) payload).getBytes(outputEncoding);
}
else
{
data = muleContext.getObjectSerializer().serialize(payload);
}
return compressByteArray(data);
}catch (Exception ioex)
{
throw new TransformerException(this, ioex);
}
}
public Object compressByteArray(byte[] bytes) throws IOException
{
if (bytes == null || isCompressed(bytes))
{
if (logger.isDebugEnabled())
{
logger.debug("Data already compressed; doing nothing");
}
return bytes;
}
if (logger.isDebugEnabled())
{
logger.debug("Compressing message of size: " + bytes.length);
}
ByteArrayOutputStream baos = null;
ZipOutputStream zos = null;
try
{
baos = new ByteArrayOutputStream(DEFAULT_BUFFER_SIZE);
zos = new ZipOutputStream(baos);
zos.putNextEntry(new ZipEntry("test.txt"));
zos.write(bytes, 0, bytes.length);
zos.finish();
zos.close();
byte[] compressedByteArray = baos.toByteArray();
baos.close();
if (logger.isDebugEnabled())
{
logger.debug("Compressed message to size: " + compressedByteArray.length);
}
return compressedByteArray;
}
catch (IOException ioex)
{
throw ioex;
}
finally
{
IOUtils.closeQuietly(zos);
IOUtils.closeQuietly(baos);
}
}
public boolean isCompressed(byte[] bytes) throws IOException
{
if ((bytes == null) || (bytes.length < 4 ))
{
return false;
}
else
{
for (int i = 0; i < MAGIC.length; i++) {
if (bytes[i] != MAGIC[i]) {
return false;
}
}
return true;
}
}
}
I used it as:
<custom-transformer class="com.test.transformer.ZipTransformer" doc:name="file zip transformer"/>
As of now it sets the file name to test.txt; you can change it using any property or variable.
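For example, a sketch of one way to pick up the original file name (against the Mule 3 MuleMessage API, assuming the flow sets flowVars['originalFilename'] as in the question's flow; the name would then have to be passed from transformMessage into compressByteArray):
String entryName = message.getInvocationProperty("originalFilename");   // flow variable
if (entryName == null) {
    entryName = message.getInboundProperty("originalFilename");         // or the inbound file property
}
zos.putNextEntry(new ZipEntry(entryName != null ? entryName : "test.txt"));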
Hope this helps.
A simpler way to do it is to use the gzip transformer in Mule to compress the file. Note that you have to do it through the XML.
<gzip-compress-transformer/>
In the ZipTransformer constructor, the following is deprecated.
registerSourceType(InputStream.class);
registerSourceType(byte[].class);
Use this instead:
registerSourceType(DataTypeFactory.create(InputStream.class));
registerSourceType(DataTypeFactory.create(byte[].class));
I have created a simple client/server program where the client takes a file from the command-line arguments. The client then sends the file to the server, where it is compressed with GZIP and sent back to the client.
The server program, when run first, is fine and produces no errors, but after running the client I get the error below.
I am getting an error saying the connection was reset, and I've tried numerous different ports, so I'm wondering if there is something wrong with my code or with the time at which I close the streams.
Any help would be greatly appreciated!
EDIT - Made changes to both programs.
Client:
import java.io.*;
import java.net.*;
//JZip Client
public class NetZip {
//Declaring private variables.
private Socket socket = null;
private static String fileName = null;
private File file = null;
private File newFile = null;
private DataInputStream fileIn = null;
private DataInputStream dataIn = null;
private DataOutputStream dataOut = null;
private DataOutputStream fileOut = null;
public static void main(String[] args) throws IOException {
try {
fileName = args[0];
}
catch (ArrayIndexOutOfBoundsException error) {
System.out.println("Please Enter a Filename!");
}
NetZip x = new NetZip();
x.toServer();
x.fromServer();
}
public void toServer() throws IOException{
while (true){
//Creating socket
socket = new Socket("localhost", 4567);
file = new File(fileName);
//Creating stream to read from file.
fileIn = new DataInputStream(
new BufferedInputStream(
new FileInputStream(
file)));
//Creating stream to write to socket.
dataOut = new DataOutputStream(
new BufferedOutputStream(
socket.getOutputStream()));
byte[] buffer = new byte[1024];
int len;
//While there is data to be read, write to socket.
while((len = fileIn.read(buffer)) != -1){
try{
System.out.println("Attempting to Write " + file
+ "to server.");
dataOut.write(buffer, 0, len);
}
catch(IOException e){
System.out.println("Cannot Write File!");
}
}
fileIn.close();
dataOut.flush();
dataOut.close();
}
}
//Read data from the serversocket, and write to new .gz file.
public void fromServer() throws IOException{
dataIn = new DataInputStream(
new BufferedInputStream(
socket.getInputStream()));
fileOut = new DataOutputStream(
new BufferedOutputStream(
new FileOutputStream(
newFile)));
byte[] buffer = new byte[1024];
int len;
while((len = dataIn.read(buffer)) != -1){
try {
System.out.println("Attempting to retrieve file..");
fileOut.write(buffer, 0, len);
newFile = new File(file +".gz");
}
catch (IOException e ){
System.out.println("Cannot Recieve File");
}
}
dataIn.close();
fileOut.flush();
fileOut.close();
socket.close();
}
}
Server:
import java.io.*;
import java.net.*;
import java.util.zip.GZIPOutputStream;
//JZip Server
public class ZipServer {
private ServerSocket serverSock = null;
private Socket socket = null;
private DataOutputStream zipOut = null;
private DataInputStream dataIn = null;
public void zipOut() throws IOException {
//Creating server socket, and accepting from other sockets.
try{
serverSock = new ServerSocket(4567);
socket = serverSock.accept();
}
catch(IOException error){
System.out.println("Error! Cannot create socket on port");
}
//Reading Data from socket
dataIn = new DataInputStream(
new BufferedInputStream(
socket.getInputStream()));
//Creating output stream.
zipOut= new DataOutputStream(
new BufferedOutputStream(
new GZIPOutputStream(
socket.getOutputStream())));
byte[] buffer = new byte[1024];
int len;
//While there is data to be read, write to socket.
while((len = dataIn.read(buffer)) != -1){
System.out.println("Attempting to Compress " + dataIn
+ "and send to client");
zipOut.write(buffer, 0, len);
}
dataIn.close();
zipOut.flush();
zipOut.close();
serverSock.close();
socket.close();
}
public static void main(String[] args) throws IOException{
ZipServer run = new ZipServer();
run.zipOut();
}
}
Error Message:
Exception in thread "main" java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:196)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at java.io.DataInputStream.read(DataInputStream.java:100)
at ZipServer.<init>(ZipServer.java:38)
at ZipServer.main(ZipServer.java:49)
First, the error occurs because the client fails and ends before sending any data, so the connection is already closed by the time the server wants to read.
The client error occurs because you assign the File objects to unused local variables (did your compiler not warn you?):
public File file = null;
public File newFile = null;
public static void main(String[] args) throws IOException {
try {
String fileName = args[0];
File file = new File(fileName);
File newFile = new File(file +".gz");
}
catch (ArrayIndexOutOfBoundsException error) {
System.out.println("Please Enter a Filename!");
}
But in your toServer method you use the class variable file as the parameter for FileInputStream; this variable is still null, which results in an error that ends the program.
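A minimal sketch of one way to fix it, keeping the structure of the original Client (assign the existing fields instead of declaring new local variables; main is inside NetZip, so it may set the private fields on the instance):
NetZip x = new NetZip();
x.file = new File(fileName);
x.newFile = new File(fileName + ".gz");
x.toServer();
x.fromServer();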
Second, when you have finished writing to the output stream, you should call
socket.shutdownOutput();
Otherwise, the server tries to read until a timeout occurs.
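For example, at the end of toServer() the client could do this instead of closing the stream (a sketch; variable names as in the question, and note that closing dataOut would also close the socket, so the reply in fromServer() could no longer be read):
dataOut.flush();
socket.shutdownOutput();   // signals end-of-stream to the server, but the socket stays open for reading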
The problem is that the server is not able to download Apache Maven.
What you can do is copy the Apache Maven folder and paste it into the wrapper folder inside the project.
This way Apache Maven is provided manually instead of being downloaded, and it will definitely work.