.contains only works if hardcoded - java

This is my first post, so please let me know if something should be different.
I've been trying to create a method that finds the index of a search term in a two dimensional ArrayList. This is the code I came up with:
import java.util.List;

public class Searcher {
    public static int Search(List<List<String>> csv, String term) throws TermNotFoundException {
        if (csv.get(0).contains(term)) {
            return csv.get(0).indexOf(term);
        } else {
            throw new TermNotFoundException("Term not found");
        }
    }
}
The problem I have right now is that when I hardcode a search term that occurs in the ArrayList I'm looking at, it works perfectly. The problem occurs when I try to use the term variable as shown above.
The specific ArrayList I'm looking at (csv.get(0)) is as follows:
[datetime_UTC, E1A, E1B, E1C, E2A, E2B, E3A, E3B, E3C, E3D, E4A, G1A, G2A, G2C, Zon]
As such, if I hardcode in "E1A", it'll find it and return 1. This doesn't work if I call the function in the main method and fill in the same thing for the variable term.
Is there something I'm missing?
EDIT: To elaborate, I cannot disclose the full two-dimensional array due to privacy reasons. I can, however, show you more info.
Some suggested not searching twice, so I have changed the code as follows:
import java.util.List;

public class Searcher {
    public static int Search(List<List<String>> csv, String term) throws TermNotFoundException {
        System.out.println(csv.get(0));
        System.out.println(term);
        int result = csv.get(0).indexOf(term);
        if (result != -1) {
            return result;
        } else {
            throw new TermNotFoundException("Term not found");
        }
    }
}
The same problem occurs. I included some debugging lines; here is the output:
[datetime_UTC, E1A, E1B, E1C, E2A, E2B, E3A, E3B, E3C, E3D, E4A, G1A, G2A, G2C, Zon]
E1A
TermNotFoundException: Term not found
at Searcher.Search(Searcher.java:12)
at Main.main(Main.java:10)
If it is any help, this is where I'm calling the function from:
import java.io.FileNotFoundException;
import java.util.List;

public class Main {
    public static void main(String[] args) throws FileNotFoundException {
        List<List<String>> csv = CSVReader.Read("standard_profiles.csv");
        try {
            System.out.println(Searcher.Search(csv, "E1A"));
        } catch (TermNotFoundException e) {
            e.printStackTrace();
        }
    }
}

public static void main(String[] args) throws Exception {
    List<String> list = new ArrayList<>();
    list.add("000");
    List<List<String>> listList = new ArrayList<>();
    listList.add(list);
    System.out.println(Search(listList, "000"));
}
The above code returns 0; I wasn't able to reproduce your problem. However, it won't work if your term isn't in the first list, as .get(0) means you are only searching the first List in your List<List<String>>.
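Separately, if the term may sit in any row rather than only in csv.get(0), a small variation of the method from the question (same Searcher and TermNotFoundException names) could walk every inner list. This is just a sketch of the idea:

import java.util.List;

public class Searcher {
    // Returns the column index of the term in the first row that contains it,
    // instead of looking only at csv.get(0).
    public static int Search(List<List<String>> csv, String term) throws TermNotFoundException {
        for (List<String> row : csv) {
            int index = row.indexOf(term);
            if (index != -1) {
                return index;
            }
        }
        throw new TermNotFoundException("Term not found");
    }
}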

Related

Calculating physico-chemical properties of amino acids in Biojava

I need to calculate the number and percentages of polar/non-polar, aliphatic/aromatic/heterocyclic amino acids in this protein sequence that I got from UNIPROT, using BioJava.
I have found in the BioJava tutorial how to read FASTA files and implemented this code, but I have no idea how to solve this problem.
If you have some ideas, please help me.
Maybe there are some sources where I can check it.
This is the code.
package biojava.biojava_project;

import java.net.URL;

import org.biojava.nbio.core.sequence.ProteinSequence;
import org.biojava.nbio.core.sequence.io.FastaReaderHelper;

public class BioSeq {
    // Inserting the sequence from UNIPROT
    public static ProteinSequence getSequenceForId(String uniProtId) throws Exception {
        URL uniprotFasta = new URL(String.format("https://rest.uniprot.org/uniprotkb/P31574.fasta", uniProtId));
        ProteinSequence seq = FastaReaderHelper.readFastaProteinSequence(uniprotFasta.openStream()).get(uniProtId);
        System.out.printf("id : P31574", uniProtId, seq, System.getProperty("line.separator"), seq.getOriginalHeader());
        System.out.println();
        return seq;
    }

    public static void main(String[] args) {
        try {
            System.out.println(getSequenceForId("P31574"));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
I don't know if BioJava stores these properties anywhere, but it's easy to just list all the amino acids with their properties manually, then iterate over the sequence and count those that satisfy the property. Here's an example for polarity:
import java.io.InputStream;
import java.net.URL;
import java.util.Set;

import org.biojava.nbio.core.sequence.ProteinSequence;
import org.biojava.nbio.core.sequence.compound.AminoAcidCompound;
import org.biojava.nbio.core.sequence.io.FastaReaderHelper;

public class BioSeq {
    public static void main(String[] args) throws Exception {
        ProteinSequence seq = loadFromUniprot("P31574");
        int polarCount = numberOfOccurrences(seq, /*Polar AAs:*/ Set.of("Y", "S", "T", "N", "Q", "C"));
        System.out.println("% of polar AAs: " + ((double) polarCount) / seq.getLength());
    }

    public static ProteinSequence loadFromUniprot(String uniProtId) throws Exception {
        URL uniprotFasta = new URL(String.format("https://rest.uniprot.org/uniprotkb/%s.fasta", uniProtId));
        try (InputStream is = uniprotFasta.openStream()) {
            return FastaReaderHelper.readFastaProteinSequence(is).get(uniProtId);
        }
    }

    private static int numberOfOccurrences(ProteinSequence seq, Set<String> bases) {
        int count = 0;
        for (AminoAcidCompound aminoAcid : seq)
            if (bases.contains(aminoAcid.getBase()))
                count++;
        return count;
    }
}
PS: don't forget to close IO streams after you use them. In the example above I used try-with-resources syntax, which automatically closes the InputStream.
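The same numberOfOccurrences helper can cover the other categories from the question by swapping in different sets. The groupings below are one common textbook classification and are my assumption, not something BioJava provides, so adjust them to whatever scheme your reference uses; the lines are meant to be dropped into the main method of the example above.

// Assumed classifications; verify against your own reference.
Set<String> nonPolar  = Set.of("G", "A", "V", "L", "I", "P", "M", "F", "W");
Set<String> aromatic  = Set.of("F", "W", "Y", "H");
Set<String> aliphatic = Set.of("A", "V", "L", "I");

int nonPolarCount  = numberOfOccurrences(seq, nonPolar);
int aromaticCount  = numberOfOccurrences(seq, aromatic);
int aliphaticCount = numberOfOccurrences(seq, aliphatic);

// Report counts and percentages relative to the sequence length
System.out.println("non-polar: " + nonPolarCount + " (" + 100.0 * nonPolarCount / seq.getLength() + "%)");
System.out.println("aromatic: " + aromaticCount + " (" + 100.0 * aromaticCount / seq.getLength() + "%)");
System.out.println("aliphatic: " + aliphaticCount + " (" + 100.0 * aliphaticCount / seq.getLength() + "%)");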

How to pass a file name as parameter, create and then read the file

I have a method as follows:
public(String input_filename, String output_filename)
{
//some content
}
How do I create an input_filename at run time and then read the input_filename? I have to pass input_filename as a parameter.
Please be patient, as I am new to Java.
Here is a complete sample:
Save it as Sample.java
Compile it with: javac Sample.java
Run it with: java Sample "in.txt" "out.txt"
or: java Sample
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class Sample {
    public static void main(String[] args) throws IOException {
        if (args.length == 2) {
            doFileStuff(args[0], args[1]);
        } else {
            doFileStuff("in.txt", "out.txt");
        }
    }

    public static void doFileStuff(String input_filename, String output_filename) throws IOException {
        if (!Files.exists(Paths.get(input_filename))) {
            System.err.println("file does not exist: " + input_filename);
            return;
        }
        if (Files.exists(Paths.get(output_filename))) {
            System.err.println("file already exists, do not overwrite it: " + output_filename);
            return;
        }
        String content = new String(Files.readAllBytes(Paths.get(input_filename)));
        content += "\nHas added something";
        Files.write(Paths.get(output_filename), content.getBytes(StandardCharsets.UTF_8));
    }
}
I'm unsure what you want to do with this method, but I hope this can help you a bit.
If you want inputs during runtime, use the Scanner class. A guide on how to use it is here.
Also, if you want an output from your method, you should use "return" rather than passing it as a parameter.
Do note that you haven't named your method yet, or specified the return type.
How it could look:
public String className(String input){
return input;
}
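Picking up the Scanner suggestion from above, here is a minimal sketch of reading the file names at run time. AskForFiles is just a placeholder class name, and it reuses doFileStuff from the Sample answer:

import java.io.IOException;
import java.util.Scanner;

public class AskForFiles {
    public static void main(String[] args) throws IOException {
        // Read the two file names from standard input at run time
        Scanner scanner = new Scanner(System.in);
        System.out.print("Input file name: ");
        String inputFilename = scanner.nextLine();
        System.out.print("Output file name: ");
        String outputFilename = scanner.nextLine();

        // Pass them on to the method that does the actual work
        Sample.doFileStuff(inputFilename, outputFilename);
    }
}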

Getting sub links of a URL using jsoup

Consider a URL www.example.com; it may have plenty of links, some internal and others external. I want to get a list of all the sub-links, not the sub-sub-links, only the sub-links.
E.g. if there are four links as follows:
1)www.example.com/images/main
2)www.example.com/data
3)www.example.com/users
4)www.example.com/admin/data
Then out of the four, only 2 and 3 are of use, as they are sub-links, not sub-sub-links and so on. Is there a way to achieve this through jsoup? If this cannot be achieved through jsoup, please introduce me to some other Java API.
Also note that it should be a link of the parent URL which is initially sent (i.e. www.example.com).
If I understand correctly that a sub-link can contain only one slash, you can attempt this by counting the number of slashes, for example:
List<String> list = new ArrayList<>();
list.add("www.example.com/images/main");
list.add("www.example.com/data");
list.add("www.example.com/users");
list.add("www.example.com/admin/data");

for (String link : list) {
    if ((link.length() - link.replaceAll("[/]", "").length()) == 1) {
        System.out.println(link);
    }
}
link.length(): counts the number of characters.
link.replaceAll("[/]", "").length(): counts the number of characters with the slashes removed, so the difference between the two is the number of slashes.
If the difference equals one, it is the right kind of link; otherwise it is not.
EDIT
How will I scan the whole website for sub-links?
The answer to this is the robots.txt file, or Robots exclusion standard. It defines the sub-links of the web site, for example https://stackoverflow.com/robots.txt. So the idea is to read this file and extract the sub-links from it. Here is a piece of code that can help you:
public static void main(String[] args) throws Exception {
    // Your web site
    String website = "http://stackoverflow.com";
    // We will read the URL https://stackoverflow.com/robots.txt
    URL url = new URL(website + "/robots.txt");
    // List of your sub-links
    List<String> list;
    // Read the file with BufferedReader
    try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()))) {
        String subLink;
        list = new ArrayList<>();
        // Loop through your file
        while ((subLink = in.readLine()) != null) {
            // Check if the sub-link matches this regex; if yes, add it to your list
            if (subLink.matches("Disallow: \\/\\w+\\/")) {
                list.add(website + "/" + subLink.replace("Disallow: /", ""));
            } else {
                System.out.println("not match");
            }
        }
    }
    // Print your result
    System.out.println(list);
}
This will show you:
[https://stackoverflow.com/posts/, https://stackoverflow.com/posts?,
https://stackoverflow.com/search/, https://stackoverflow.com/search?,
https://stackoverflow.com/feeds/, https://stackoverflow.com/feeds?,
https://stackoverflow.com/unanswered/,
https://stackoverflow.com/unanswered?, https://stackoverflow.com/u/,
https://stackoverflow.com/messages/, https://stackoverflow.com/ajax/,
https://stackoverflow.com/plugins/]
Here is a demo of the regex that I use.
Hope this can help you.
To scan the links on the web page, you can use the JSoup library.
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

class read_data {
    public static void main(String[] args) {
        try {
            Document doc = Jsoup.connect("**your_url**").get();
            Elements links = doc.select("a");
            List<String> list = new ArrayList<>();
            for (Element link : links) {
                list.add(link.attr("abs:href"));
            }
        } catch (IOException ex) {
        }
    }
}
list can be used as suggested in the previous answer.
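If you only want the single-slash sub-links of the page you started from, a rough sketch could combine the jsoup link collection above with the slash-count idea from the first answer. SubLinkFilter is just a placeholder name and www.example.com a placeholder URL:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class SubLinkFilter {
    public static void main(String[] args) throws IOException {
        String site = "http://www.example.com"; // placeholder URL
        Document doc = Jsoup.connect(site).get();

        List<String> subLinks = new ArrayList<>();
        for (Element link : doc.select("a")) {
            String href = link.attr("abs:href");
            // Keep only links under the starting site...
            if (!href.startsWith(site)) {
                continue;
            }
            // ...whose path contains exactly one slash after the host,
            // i.e. sub-links rather than sub-sub-links.
            String path = href.substring(site.length());
            if (path.length() - path.replace("/", "").length() == 1) {
                subLinks.add(href);
            }
        }
        System.out.println(subLinks);
    }
}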
The code for reading all the links on a website is given below. I have used http://stackoverflow.com/ for illustration. I would recommend you go through the company's terms of use before scraping its website.
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.select.Elements;

public class readAllLinks {
    public static Set<String> uniqueURL = new HashSet<String>();
    public static String my_site;

    public static void main(String[] args) {
        readAllLinks obj = new readAllLinks();
        my_site = "stackoverflow.com";
        obj.get_links("http://stackoverflow.com/");
    }

    private void get_links(String url) {
        try {
            Document doc = Jsoup.connect(url).get();
            Elements links = doc.select("a");
            links.stream().map((link) -> link.attr("abs:href")).forEachOrdered((this_url) -> {
                boolean add = uniqueURL.add(this_url);
                if (add && this_url.contains(my_site)) {
                    System.out.println(this_url);
                    get_links(this_url);
                }
            });
        } catch (IOException ex) {
        }
    }
}
You will get list of all the links in uniqueURL field.

"Attributes and objects cannot be resolved" - error

The following code is for reading or writing files with Java, but Eclipse prints these errors:
buffer_1 cannot be resolved to a variable
file_reader cannot be resolved
...and similar errors for other attributes.
What is wrong in this code?
// Class File_RW
package R_2;

import java.io.File;
import java.io.FileReader;
import java.io.FileNotFoundException;
import java.lang.NullPointerException;

public class File_RW {
    public File_RW() throws FileNotFoundException, NullPointerException {
        File file_to_read = new File("C:/myfiletoread.txt");
        FileReader file_reader = new FileReader(file_to_read);
        int nr_letters = (int) file_to_read.length() / Character.BYTES;
        char buffer_1[] = new char[nr_letters];
    }

    public void read() {
        file_reader.read(buffer_1, 0, nr_letters);
    }

    public void print() {
        System.out.println(buffer_1);
    }

    public void close() {
        file_reader.close();
    }

    public File get_file_to_read() {
        return file_to_read;
    }

    public int get_nr_letters() {
        return nr_letters;
    }

    public char[] get_buffer_1() {
        return buffer_1;
    }

    //...
}
// main method in class Start:
package R_2;

import java.io.File;
import java.io.FileReader;
import java.io.FileNotFoundException;
import java.lang.NullPointerException;

public class Start {
    public static void main(String[] args) {
        File_RW file = null;
        try {
            file = new File_RW();
        } catch (NullPointerException e_1) {
            System.out.println("File not found.");
        }
        //...
    }
}
I can't find any mistake. I have also tried to include a try-catch statement in the constructor of the class "File_RW", but the error messages were the same.
Yes, there are errors in your code, and they are of a really basic nature: you are declaring variables instead of fields.
Meaning: you have them in the constructor, but they need to go one layer up! When you declare an entity within a constructor or method, it is a variable that only exists within that constructor/method.
If you want multiple methods to be able to make use of that entity, it needs to be a field, declared in the scope of the enclosing class, like:
class FileRW {
private File fileToRead = new File...
...
and then you can use your fields within all your methods! Please note: you can do the actual setup within your constructor:
class FileRW {
private File fileToRead;
public FileRW() {
fileToRead = ..
but you don't have to.
Finally: please read about Java naming conventions. Avoid using "_" within names (except for constants like SOME_CONSTANT)!
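Putting the pieces together, here is a minimal sketch of the class with the declarations moved up to fields (and the underscores dropped, per the note above). The file path and the buffer-size computation are simply taken over from the question:

import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;

public class FileRW {
    // Fields: visible to every method of the class
    private File fileToRead;
    private FileReader fileReader;
    private int nrLetters;
    private char[] buffer;

    public FileRW() throws FileNotFoundException {
        fileToRead = new File("C:/myfiletoread.txt");
        fileReader = new FileReader(fileToRead);
        // Mirrors the size computation from the original code
        nrLetters = (int) fileToRead.length() / Character.BYTES;
        buffer = new char[nrLetters];
    }

    public void read() throws IOException {
        fileReader.read(buffer, 0, nrLetters);
    }

    public void print() {
        System.out.println(buffer);
    }

    public void close() throws IOException {
        fileReader.close();
    }
}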
The Java code is already running... thanks.
The same program, edited in C++ with Visual Studio Express, is covered in this Stack Overflow entry:
c++ file read write-error: Microsoft Visual C++ Runtime libr..debug Assertion failed, expr. stream.valid()

Polarion WorkItem class getter methods returning null

I am attempting to write a Java program that gets work record information off of Polarion and writes it to a DAT file for later use.
I have successfully connected to our servers and have retrieved the WorkItem objects, but none of the getter methods (besides getUri()) seem to work, which poses a problem since I need to use the WorkItem class's getWorkRecords() method to satisfy the requirements of the project.
I have tried all of the getter methods for the class on both our main Polarion server and our 'staging' server, which we use as a kind of testing area for things such as the program I am trying to write and on which I have full permissions.
Regardless of permissions, I am only querying for some dummy workitems I created and assigned to myself, so there shouldn't be any permissions issues since I am only attempting to view my own workitems.
Here is the code for the program:
package test;
//stg= 10.4.1.50
//main= 10.4.1.10
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.MalformedURLException;
import java.rmi.RemoteException;
import java.util.ArrayList;
import javax.xml.rpc.ServiceException;
import com.polarion.alm.ws.client.WebServiceFactory;
import com.polarion.alm.ws.client.session.SessionWebService;
import com.polarion.alm.ws.client.tracker.TrackerWebService;
import com.polarion.alm.ws.client.types.tracker.WorkItem;
import com.polarion.alm.ws.client.types.tracker.WorkRecord;
public class WorkrecordImporter {
private WebServiceFactory factory;
private TrackerWebService trackerService;
private SessionWebService sessionService;
private WorkItem[] workItems;
private ArrayList<WorkRecord> workRecords;
private String password = //insertpasswordhere;//no peaking
public WorkrecordImporter()throws ServiceException, IOException, ClassNotFoundException{
initializeFields();//initializes all of the Web Services and arrays
//step one
getWorkItems();
//readDATFile();
//step two
getWorkRecords();
//$$$
printWorkRecords();
//$$$$$
writeDATFile();
}
//you know what this does.
public void printWorkRecords(){
for(int temp = 0; temp < workItems.length; temp++){
System.out.println(workItems[temp].getUri().toString());
}
}
public void writeDATFile() throws IOException{
ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("C:\\Users\\sweidenkopf\\workspace\\test\\filename.dat"));
try {
out.writeObject(workRecords);
} finally {
// Make sure to close the file when done
out.close();
}
}
/**
* This method sets up the WebServiceFactory at the specified URL. It then initializes the web services, logs in the
* session service, and initializes the arrays.
* @throws MalformedURLException
* @throws ServiceException
* @throws RemoteException
*/
public void initializeFields() throws MalformedURLException, ServiceException, RemoteException{
factory = new WebServiceFactory("//insert url here");
trackerService = factory.getTrackerService();
sessionService = factory.getSessionService();
sessionService.logIn("sweidenkopf", password);
workRecords = new ArrayList<>();
}
public void getWorkItems()throws MalformedURLException, ServiceException, RemoteException{
sessionService.beginTransaction();
workItems = trackerService.queryWorkItems("workRecords.user.id:sweidenkopf", null, null);
sessionService.endTransaction(false);
}
public void getWorkRecords()throws MalformedURLException, ServiceException, RemoteException{
sessionService.beginTransaction();
for(int k = 0; k < workItems.length; k++)
{System.out.println("This is working");
try{//not every work item has work records
System.out.println(workItems[k].getWorkRecords());
WorkRecord[] temp;
temp = workItems[k].getWorkRecords();
for(int x = 0; x < temp.length; x++){
System.out.println("This is working fine");
workRecords.add(temp[x]);
}
}catch(NullPointerException e){
System.out.println("I must regretfully inform you that I have grave news; your endeavors have not been successfull.");
continue;
}
}
System.out.println(workRecords.toString());
sessionService.endTransaction(false);
}
public void readDATFile() throws FileNotFoundException, IOException, ClassNotFoundException{
ObjectInputStream in = new ObjectInputStream(new FileInputStream("C:\\Users\\sweidenkopf\\workspace\\test\\filename.dat"));
try{
Object temp = in.readObject();
workRecords = (ArrayList) temp;
}
finally{
in.close();
}
}
}
The most important part is of course the getWorkRecords() method within my code. As you can see, it contains the statement System.out.println(workItems[k].getWorkRecords()); that I am using for debugging purposes. This statement prints null, and the only WorkItem method that does not return null when substituted in that statement is getUri(). Also, the try-catch block in that method always catches a NullPointerException because the for loop uses temp.length, temp being a variable that should contain the return of the getWorkRecords() method.
To summarize, the main issue here is that I am unable to return anything from getWorkRecords() or any other getter methods from the WorkItem class. This is puzzling because the getUri() method is executing successfully, as the printWorkRecords() method from my code successfully prints the URIs of all of the WorkItem objects returned from my query.
Are there any Polarion experts that have encountered this issue before? Does anyone know what I am doing wrong? I am inclined to think it is a bug based on the circumstances.
If you look at my call to the queryWorkItems() method, you will notice that after the query parameter I specify two null parameters. The first specifies how you want the returned work items sorted (which is inconsequential at the moment), but the second is a String array called fields that is used to specify which of the WorkItem fields you want returned along with the WorkItems themselves. Apparently, if you set that to null like I did, it defaults to only returning the URI. For other things, like author, workrecord, and type, you have to place them in the String array and pass that array when you call the method.
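As an illustration of that fix, the call could look roughly like the sketch below; it only swaps the second null for a fields array. The exact field names accepted by the web service are an assumption on my part (the text above only mentions author, work records, and type), so check them against your Polarion documentation:

// Hypothetical field names; with null here, only the URI comes back.
String[] fields = new String[] { "author", "workRecords", "type" };

sessionService.beginTransaction();
workItems = trackerService.queryWorkItems("workRecords.user.id:sweidenkopf", null, fields);
sessionService.endTransaction(false);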
