Unable to attach file to issue in Jira via REST API in Java

I want to attach multiple files to an issue. I am able to create the issue successfully, but I am running into a problem attaching documents after creating it. I have referred to this link: SOLVED: attach a file using REST from ScriptRunner.
I am getting a 404 error even though the issue exists and the user has all the required permissions.
File fileToUpload = new File("D:\\dummy.txt");
InputStream in = null;
try {
    in = new FileInputStream(fileToUpload);
} catch (FileNotFoundException e) {
    e.printStackTrace();
}
HttpResponse<String> response3 = Unirest
        .post("https://.../rest/api/2/issue/test-85/attachments")
        .basicAuth(username, password)
        .field("file", in, "dummy.txt")
        .asString();
System.out.println(response3.getStatus());
Here test-85 is the issue key, and I am using open-unirest-java-3.3.06.jar. Is the way I am attaching documents correct?

I am not sure how open-unirest manages its fields; maybe it tries to send them as a JSON field rather than as multipart POST content.
I've been using rcarz's Jira client. It's a little bit outdated, but it still works.
Maybe looking at its code will help you, or you can just use it directly.
The Issue class:
public JSON addAttachment(File file) throws JiraException {
    try {
        return restclient.post(getRestUri(key) + "/attachments", file);
    } catch (Exception ex) {
        throw new JiraException("Failed add attachment to issue " + key, ex);
    }
}
And in the RestClient class:
import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
import org.apache.http.entity.mime.MultipartEntity;
import org.apache.http.entity.mime.content.FileBody;

public JSON post(String path, File file) throws RestException, IOException, URISyntaxException {
    return request(new HttpPost(buildURI(path)), file);
}

private JSON request(HttpEntityEnclosingRequestBase req, File file) throws RestException, IOException {
    if (file != null) {
        File fileUpload = file;
        req.setHeader("X-Atlassian-Token", "nocheck");
        MultipartEntity ent = new MultipartEntity();
        ent.addPart("file", new FileBody(fileUpload));
        req.setEntity(ent);
    }
    return request(req);
}
So I'm not sure why you're getting a 404. Jira is sometimes fuzzy and not really clear about its errors; try printing the full error response, or check Jira's logs if you can. Maybe it's just the missing "X-Atlassian-Token: nocheck" header, so try adding it.
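For what it's worth, here is a minimal sketch of the same Unirest call with that header added. It assumes open-unirest exposes the usual field(String, File) multipart overload; the URL and the username/password placeholders are the ones from the question.

// Sketch only: same request as above, plus the XSRF-check bypass header.
// Newer Jira versions document the value as "no-check"; older ones use "nocheck".
File fileToUpload = new File("D:\\dummy.txt");
HttpResponse<String> response = Unirest
        .post("https://.../rest/api/2/issue/test-85/attachments")
        .basicAuth(username, password)
        .header("X-Atlassian-Token", "no-check")
        .field("file", fileToUpload)   // let the client build the multipart part itself
        .asString();
System.out.println(response.getStatus() + " " + response.getBody());

If the 404 persists with the header in place, the status and body printed above should at least say whether Jira is rejecting the issue key, the endpoint, or the multipart payload.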


Vaadin Upload UnsupportedOperationException issue while creating a File

I'm trying to receive a MIDI file through a form in Vaadin, but when I try to get the upload as a File object I get an UnsupportedOperationException. It happens on the line File midiFile = fileData.getFile();
java.lang.UnsupportedOperationException: class java.io.ByteArrayOutputStream not supported. Use a UploadOutputStream
In the form it seems that the file has been uploaded, but the error occurs when trying to obtain the File. I don't know why this is happening, as I followed the methods in the Vaadin documentation to get the file from the Upload, and I don't understand what this exception's note "Happens if outputBuffer is not an UploadOutputStream" means.
https://vaadin.com/api/platform/23.0.9/com/vaadin/flow/component/upload/receivers/FileData.html
And if I call getFileName() on the FileData obtained from the MemoryBuffer, I can see that the recently uploaded file is there.
https://vaadin.com/api/platform/23.0.9/com/vaadin/flow/component/upload/receivers/MemoryBuffer.html
This is the full code.
import java.io.File;
import java.io.InputStream;

import com.vaadin.flow.component.html.Label;
import com.vaadin.flow.component.upload.Upload;
import com.vaadin.flow.component.upload.receivers.FileData;
import com.vaadin.flow.component.upload.receivers.MemoryBuffer;

public MainView() {
    MemoryBuffer memoryBuffer = new MemoryBuffer();
    Upload midiFileUpload = new Upload(memoryBuffer);
    midiFileUpload.setDropLabel(new Label("Upload a file in .mid format"));
    midiFileUpload.addSucceededListener(event -> {
        InputStream inputFileData = memoryBuffer.getInputStream();
        String fileName = event.getFileName();
        long contentLength = event.getContentLength();
        String mimeType = event.getMIMEType();
        FileData fileData = memoryBuffer.getFileData();
        try {
            File midiFile = fileData.getFile();
        } catch (UnsupportedOperationException uoe) {
            System.out.println("OutputBuffer is not an UploadOutputStream.");
            uoe.printStackTrace();
        } catch (NullPointerException npe) {
            System.out.println("Empty buffer.");
            npe.printStackTrace();
        }
    });
}
As the name implies, MemoryBuffer stores the uploaded file in memory, so it can't provide a java.io.File, only an InputStream to read the data from. If you want Upload to use a (temporary!) file, use a FileBuffer instead.
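For illustration, here is a minimal sketch of the FileBuffer variant. The upload setup and label are taken from the question's code; the point is that FileBuffer writes the upload to a temporary file, so getFile() is available on its FileData.

import java.io.File;

import com.vaadin.flow.component.html.Label;
import com.vaadin.flow.component.upload.Upload;
import com.vaadin.flow.component.upload.receivers.FileBuffer;

public MainView() {
    FileBuffer fileBuffer = new FileBuffer();
    Upload midiFileUpload = new Upload(fileBuffer);
    midiFileUpload.setDropLabel(new Label("Upload a file in .mid format"));
    midiFileUpload.addSucceededListener(event -> {
        // FileBuffer stores the upload in a temporary file, so getFile() works here.
        File midiFile = fileBuffer.getFileData().getFile();
        System.out.println("Uploaded to " + midiFile.getAbsolutePath());
    });
}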
I don't know why this issue was happening, but I solved it just by changing from the MemoryBuffer to the FileBuffer class. Now it works.

How to parse a big rdf file in rdf4j

I want to parse a huge file in RDF4J using the following code, but I get an exception due to a parser limit:
public class ConvertOntology {

    public static void main(String[] args) throws RDFParseException, RDFHandlerException, IOException {
        String file = "swetodblp_april_2008.rdf";
        File initialFile = new File(file);
        InputStream input = new FileInputStream(initialFile);

        RDFParser parser = Rio.createParser(RDFFormat.RDFXML);
        parser.setPreserveBNodeIDs(true);
        Model model = new LinkedHashModel();
        parser.setRDFHandler(new StatementCollector(model));
        parser.parse(input, initialFile.getAbsolutePath());

        FileOutputStream out = new FileOutputStream("swetodblp_april_2008.nt");
        RDFWriter writer = Rio.createWriter(RDFFormat.TURTLE, out);
        try {
            writer.startRDF();
            for (Statement st : model) {
                writer.handleStatement(st);
            }
            writer.endRDF();
        } catch (RDFHandlerException e) {
        } finally {
            out.close();
        }
    }
}
The parser has encountered more than "100,000" entity expansions in this document; this is the limit imposed by the application.
I execute my code as follows, as suggested on the RDF4J web site, to set the two parameters (as in the following command):
mvn -Djdk.xml.totalEntitySizeLimit=0 -DentityExpansionLimit=0 exec:java
Any help, please?
The error comes from the Apache Xerces XML parser being picked up, rather than the default JDK XML parser.
So just delete the Xerces XML folder from your .m2 repository and the code works fine.
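As a side note, if touching the local repository is not an option, a minimal sketch would set the same limits the question passes via mvn from inside the program before the parser is created. Whether these properties are honoured depends on which XML parser implementation is actually on the classpath; as noted above, Xerces may ignore them.

// Sketch only: mirror the -D flags from the question inside the JVM itself.
// The jdk.xml.* properties are read by the JDK's built-in JAXP parser;
// a third-party Xerces on the classpath may not respect them.
System.setProperty("jdk.xml.totalEntitySizeLimit", "0");
System.setProperty("jdk.xml.entityExpansionLimit", "0");
System.setProperty("entityExpansionLimit", "0"); // legacy property name

RDFParser parser = Rio.createParser(RDFFormat.RDFXML);
parser.setPreserveBNodeIDs(true);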

Cannot read file when run within jar file

I have an Akka HTTP service. I simply return the API documentation for a GET request. The documentation is in an HTML file.
It all works fine when run within the IDE. When I package it as a jar, I get a 'resource not found' error. I am not sure why it cannot read the HTML file when it is packaged in a jar but works fine in the IDE.
Here is the code for the route.
private Route topLevelRoute() {
return pathEndOrSingleSlash(() -> getFromResource("asciidoc/html/api.html"));
}
The files are located on the resource path.
I have got this working now. I am doing this:
private Route topLevelRoute() {
    try {
        InputStreamReader inputStreamReader =
                new InputStreamReader(getClass().getResourceAsStream("/asciidoc/html/api.html"));
        BufferedReader bufferedReader = new BufferedReader(inputStreamReader);
        // Read the stream into a string builder
        StringBuilder strBuild = new StringBuilder();
        bufferedReader.lines().forEach(strBuild::append);
        bufferedReader.close();
        inputStreamReader.close();
        // Return the string with the content type set to HTML
        return complete(HttpEntities.create(ContentTypes.TEXT_HTML_UTF8, strBuild.toString()));
    } catch (Exception ex) {
        // Catch any exception here
        return complete(StatusCodes.INTERNAL_SERVER_ERROR);
    }
}

PDF file couldn't load on deployment but it works successfully on local

I need your help, I'm in trouble.
I have to produce a report in my project. I did it and it works well: I send a request from the browser, my Java code generates the report via Jasper, adds it to the response, and Chrome displays it.
Our system engineer deployed the project the other day, and now my report file (.pdf) won't load. We don't understand why. Chrome says "Failed to load PDF file", while Firefox and IE show the raw bytes as text. We looked at the log file, but there is no error and no exception.
Here is my Java report code:
#Path("print")
#GET
#UnitOfWork
#Produces("application/pdf")
public Response print(#Context HttpServletResponse response,
#QueryParam("mbsFileOid") String mbsFileOid,
#QueryParam("followAfterDate") Date followAfterDate,
#QueryParam("fileOid") long fileOid,
#Context UriInfo allUri) throws IOException{
OutputStream stream = response.getOutputStream();
List<Commitment> commitments = commitmentDao.findAllCommitmensByFileOid(""+fileOid);
List<CommitmentsReportValue> commitmentsReportValues = new ArrayList<>();
DateFormat sdf = new SimpleDateFormat("dd/MM/yyyy");
for (Commitment commitment : commitments) {
CommitmentsReportValue commitmentsReportValue = new CommitmentsReportValue();
commitmentsReportValue.setAmount(commitment.getAmount().toString());
commitmentsReportValue.setSortId(""+commitment.getSortId());
commitmentsReportValue.setCommitmentDate(sdf.format(commitment.getCommitmentDate()));
commitmentsReportValues.add(commitmentsReportValue);
if(commitment.getCommitmentStatus() == Commitment.CommitmentStatus.NotPaid)
commitmentsReportValue.setStatus("Ödenmedi");
else
commitmentsReportValue.setStatus("Ödendi");
}
CoverCalculationDTO coverCoverCalculationDTO = mbsFileDao.getCoverCalculation(mbsFileOid,followAfterDate);
coverCoverCalculationDTO.setFollowTotalInterest(""+coverCoverCalculationDTO.getFollowAfterTotalInterest());
coverCoverCalculationDTO.setTotalBalance(""+coverCoverCalculationDTO.getLastTotalBalance());
MbsFile mbsFile = mbsFileDao.findById(mbsFileOid);
try {
buildAggregateReport(mbsFile,coverCoverCalculationDTO,commitmentsReportValues).toPdf(stream);
} catch (DRException e) {
LOGGER.info(e.getMessage());
}
response.addHeader("Content-Disposition", "inline; filename=\"rapor.pdf\"");
stream.flush();
stream.close();
return Response.ok().build();
(Screenshots were attached here: one from my local server, and one each from the deployment in Chrome and in Firefox.)
It seems ridiculous to me: it works perfectly on my local computer but doesn't work on the deployment, and there are no errors and no exceptions.
Could you help me, please?

FTP exception 501 "pathname" more than 8 characters

I am trying to access a file via a URI using the FTP protocol. For obvious security reasons I had to change some of the names below, but this is where the problems seem to be coming from.
My URI is as follows:
ftp://user:pasword@host.net/u/Bigpathname/XYZ/ABC/BigPathname/bigpathname/xyz/abc/MY_LOG.LOG
And I see this exception:
sun.net.ftp.FtpProtocolException: CWD Bigpathname:501 A qualifier in "Bigpathname" is more than 8 characters
This is really confusing, as I can access the file from a Windows 7 command line with the CD command just fine, both one directory at a time and as a full path.
I found one article mentioning that MVS file names must be 8 or fewer characters, but this does not explain how I can get to these same files from my command line! They do exist; there is data there that I can download manually, but I cannot get there via a URI in Java.
PS: I use .toURL().openStream() to get files on my local machine just fine; it only fails when I try to get them from my server.
EDIT October 1st
I am able to access files on the MVS host using FileZilla and the basic FTP client from the Windows 7 command line, but I still cannot get them from a URI/URL. I downloaded a very basic Java-built FTP client and tried accessing the same file in my program from there; the path works, but because my file name has a dot in it ("MY_LOG.LOG") I am getting File does not exist 501 Invalid data set name "MY_LOG.LOG". Use MVS Dsname conventions. I am utterly perplexed by this...
EDIT October 1st, afternoon :)
OK, I finally got it to work with an FTP client in my Java code, but I still want to use the URL class, as I have logs on both local and remote machines. Is there a way to encode a URL string so that it can retrieve a file from a remote machine over the FTP protocol? I am not sure how it works in the Java URL class, but in the FTP client I had to use the CWD command and then RETR.
If I can do this, then I have one solution for getting all my logs; otherwise I will have to detect whether it is a file or ftp URL and behave differently. Not the end of the world, but not what I want...
The code that tries to get the file with just a URL is as follows (sysc is a valid host):
void testFTP()
{
    String ftp = "ftp://user:pword@sysc/u/Xxxxxxxxxx/ICS/YT7/XxxxxXxxxxxxx/xxxxxxxxx/logs/xxxxxxxx/XX_YT.LOG";
    try
    {
        URI uri = new URI(ftp);
        URL ftpFile = uri.toURL();
        BufferedReader in = new BufferedReader(new InputStreamReader(ftpFile.openStream()));
        String inputLine;
        while ((inputLine = in.readLine()) != null)
            System.out.println(inputLine);
        in.close();
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
}
In this case I think the problem is also server-related. It all works fine for me with FileZilla Server, except when the filename length (including directories) exceeds 255 characters. But if you want to use the URL class with another FTP server, you must override or implement your own URLStreamHandlerFactory:
URL.setURLStreamHandlerFactory(...);
I haven't found one for my favorite Java FTP client, which is the Apache one, so I have developed one, but it may need a few touch-ups.
package net.custom.streamhandler.apacheftp;

import java.io.IOException;
import java.io.InputStream;
import java.net.SocketException;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;
import java.net.URLStreamHandlerFactory;

import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPReply;

public class ApacheURLStreamHandlerFactory implements URLStreamHandlerFactory {

    public URLStreamHandler createURLStreamHandler(String protocol) {
        // This will only override the chosen protocol
        if (protocol.equalsIgnoreCase("ftp"))
            return new CustomHandler();
        else
            return null;
    }
}

class CustomHandler extends URLStreamHandler {

    protected URLConnection openConnection(URL url) throws IOException {
        return new CustomURLConnection(url);
    }
}

class CustomURLConnection extends URLConnection {

    int reply;
    FTPClient ftp = new FTPClient();
    InputStream in;
    static int defaultPort = 21;
    static String defaultPath = "/";

    CustomURLConnection(URL url) throws IOException {
        super(url);
    }

    synchronized public void connect() throws IOException {
        try {
            int port;
            if ((port = url.getPort()) == -1)
                port = defaultPort;
            ftp.connect(url.getHost(), port);

            String login = "anonymous";
            String password = "";
            if (url.getAuthority().indexOf(':') > -1 &&
                url.getAuthority().indexOf('@') > -1) {
                String[] auxArray = url.getAuthority().replaceAll("@", ":").split(":");
                login = auxArray[0];
                password = auxArray[1];
            }
            ftp.login(login, password);

            reply = ftp.getReplyCode();
            if (FTPReply.isPositiveCompletion(reply)) {
                System.out.println("Connected Apache Success");
            } else {
                System.out.println("Connection Apache Failed");
                ftp.disconnect();
            }
            in = ftp.retrieveFileStream(url.getFile());
        } catch (SocketException ex) {
            ex.printStackTrace();
        } catch (IOException ex) {
            ex.printStackTrace();
        }
        connected = true;
    }

    synchronized public InputStream getInputStream() throws IOException {
        if (!connected)
            connect();
        return (in);
    }
}
Keep in mind that you can implement new ways to handle different protocols for java.net.URL this way.
Your code...
...
{
    String ftp = "ftp://user:pword@sysc/u/Xxxxxxxxxx/ICS/YT7/XxxxxXxxxxxxx/xxxxxxxxx/logs/xxxxxxxx/XX_YT.LOG";
    try
    {
        URL.setURLStreamHandlerFactory(new ApacheURLStreamHandlerFactory());
        ...
G'Bye
(To err is human, to forgive is divine)
Try using the short name for the path. Something like /U/BIGPAT~1/XYZ/ABC/BIGPAT~1/BIGPAT~1/XYZ/ABC/MY_LOG.LOG
You can find the short name for any directory longer than 8 characters with dir /x.
FTP clients are notoriously difficult to write given the variation of (and bugs in) server implementations.
I'm betting that MVS is not completely supported by sun.net.ftp.FtpClient, which is the class used under the hood when you call URL.openStream on an FTP URL.
The Apache Commons Net library should support MVS, but it sounds like you already found a working client.
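For completeness, here is a minimal sketch of fetching the log with Commons Net's FTPClient directly. The host, credentials and path are the placeholders from the question, and the CWD-then-RETR sequence mirrors what the question's edit describes as working.

import java.io.FileOutputStream;
import java.io.OutputStream;

import org.apache.commons.net.ftp.FTPClient;

public class FetchLog {
    public static void main(String[] args) throws Exception {
        FTPClient ftp = new FTPClient();
        try {
            ftp.connect("sysc");          // placeholder host from the question
            ftp.login("user", "pword");   // placeholder credentials
            // Change into the directory first, then retrieve by simple name,
            // mirroring the CWD + RETR sequence from the question's edit.
            ftp.changeWorkingDirectory("/u/Xxxxxxxxxx/ICS/YT7/XxxxxXxxxxxxx/xxxxxxxxx/logs/xxxxxxxx");
            try (OutputStream out = new FileOutputStream("XX_YT.LOG")) {
                if (!ftp.retrieveFile("XX_YT.LOG", out)) {
                    System.err.println("RETR failed: " + ftp.getReplyString());
                }
            }
        } finally {
            if (ftp.isConnected()) {
                ftp.logout();
                ftp.disconnect();
            }
        }
    }
}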
Have you considered using RMI for transporting the files? That way you can pass the direct path to the file as a parameter, without using FTP, and have the file sent back as a byte array. A sketch of the idea is below.
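Purely as an illustration of that suggestion, a hypothetical remote interface might look like this (none of these names come from the question):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.rmi.Remote;
import java.rmi.RemoteException;

// Hypothetical remote interface: the caller passes the path of the log on the
// remote machine and gets the raw bytes back, with no FTP involved.
interface LogFetcher extends Remote {
    byte[] fetchLog(String path) throws RemoteException, IOException;
}

// Server-side sketch: would be exported with UnicastRemoteObject.exportObject
// and bound in an RMI registry so clients can look it up.
class LogFetcherImpl implements LogFetcher {
    @Override
    public byte[] fetchLog(String path) throws IOException {
        return Files.readAllBytes(Paths.get(path));
    }
}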
