JavaScriptException when using DataTable.create() in JUnit with GWTTestCase - java

For a school project, I have to test one of my classes. After writing and executing the test, I got the following error message and stacktrace:
com.google.gwt.core.client.JavaScriptException: (null)
@com.google.gwt.visualization.client.DataTable::create()([]): null
at com.google.gwt.dev.shell.BrowserChannelServer.invokeJavascript(BrowserChannelServer.java:249)
at com.google.gwt.dev.shell.ModuleSpaceOOPHM.doInvoke(ModuleSpaceOOPHM.java:136)
at com.google.gwt.dev.shell.ModuleSpace.invokeNative(ModuleSpace.java:576)
at com.google.gwt.dev.shell.ModuleSpace.invokeNativeObject(ModuleSpace.java:284)
at com.google.gwt.dev.shell.JavaScriptHost.invokeNativeObject(JavaScriptHost.java:91)
at com.google.gwt.visualization.client.DataTable$.create(DataTable.java)
at com.gwt.client.VizualizationManagerTest.gwtSetUp(VizualizationManagerTest.java:150)
...
From what I read, the problem is with the method DataTable.create(), which creates a default DataTable that can later be filled with columns and rows.
I want to test whether the data is set up correctly. My class reformats the initial data so that it can be used for GWT graphs: basically, it takes one very large DataTable and converts it into an ArrayList of smaller DataTables. But as far as I can tell, my test never even made it to that method; it got stuck on DataTable.create() in the @Before method gwtSetUp().
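The conversion being tested can be modeled without GWT at all. Here is a minimal plain-Java sketch (a hypothetical `TableSplitter`, not the actual `VisualizationManager`), assuming the first column holds labels and each remaining column is one data series:

```java
import java.util.ArrayList;
import java.util.List;

class TableSplitter {
    // Split a wide table into one (label, value) table per data column,
    // mirroring what prepareData() is described to do with DataTables.
    static List<String[][]> split(String[][] wide) {
        List<String[][]> result = new ArrayList<>();
        int rows = wide.length;
        int cols = wide[0].length;
        for (int c = 1; c < cols; c++) {      // one small table per data column
            String[][] small = new String[rows][2];
            for (int r = 0; r < rows; r++) {
                small[r][0] = wide[r][0];     // label column is repeated
                small[r][1] = wide[r][c];     // this series' value
            }
            result.add(small);
        }
        return result;
    }
}
```

The assertions in the test below (number of small tables equals number of data columns, cell-by-cell equality) follow the same shape as the checks in `testPrepareData()`.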
Here is the code of my test:
package com.gwt.client;
import static org.junit.Assert.*;
import java.util.Random;
import org.junit.Before;
import org.junit.Test;
import com.google.gwt.core.client.JavaScriptObject;
import com.google.gwt.junit.client.GWTTestCase;
import com.google.gwt.user.client.ui.Label;
import com.google.gwt.visualization.client.AbstractDataTable;
import com.google.gwt.visualization.client.DataTable;
import com.google.gwt.visualization.client.AbstractDataTable.ColumnType;
public class VizualizationManagerTest extends GWTTestCase {
@Test
public void testPrepareData() {
VisualizationManager.TableDATA = (DataTable) data;
VisualizationManager.prepareData();
assertEquals(data.getNumberOfColumns(), VisualizationManager.DATA.length);
assertEquals(data.getNumberOfRows(), VisualizationManager.DATA[VisualizationManager.DATA.length-1].getNumberOfRows());
//check all Cells for equality
for (int i = 0; i < data.getNumberOfColumns()-1; i++) {
for (int j = 0; j < data.getNumberOfRows(); j++) {
String test1 = VisualizationManager.DATA[i].getFormattedValue(j, 0);
String tested1 = data.getFormattedValue(j, i+1);
assertEquals(tested1, test1);
}
}
}
public AbstractDataTable data;
@Before
public void gwtSetUp() {
DataTable mydata = DataTable.create();
mydata.addColumn(ColumnType.STRING, "Country");
mydata.addColumn(ColumnType.NUMBER, "2011");
mydata.addColumn(ColumnType.NUMBER, "2010");
mydata.addColumn(ColumnType.NUMBER, "2009");
mydata.addRows(5);
mydata.setCell(0, 0, "Switzerland", "Switzerland", null);
mydata.setCell(1, 0, "Germany", "Germany", null);
mydata.setCell(2, 0, "Austria", "Austria", null);
mydata.setCell(3, 0, "Slovakia", "Slovakia", null);
mydata.setCell(4, 0, "Czech Republic", "Czech Republic", null);
Random random = new Random();
for (int i=1; i<4; i++) {
for (int j=0; j<5; j++) {
double number = random.nextDouble() % 1000.0;
mydata.setCell(j, i, number, Double.toString(number), null);
}
}
data = mydata;
}
@Override
public String getModuleName() {
return "com.gwt.AgrarAlpha_v1";
}
}
Unfortunately, I wasn't able to find out how to fix the problem. I can't tell whether it has to do with the setup of the whole test, whether the DataTable class simply can't be tested this way, or something else entirely...
I am using Eclipse Luna with Java 7 update 71 on OS X Yosemite (10.10).
Thank you for your help!!
Cheers, Romi

The problem with all the GWT Visualization classes (in production code as well as in JUnit/GWTTestCase tests) is that the Visualization API has to be loaded from the web before it can be used.
This means that everything that has anything to do with GWT Visualization needs to be written the following way:
class Test extends GWTTestCase {
    @Test
    public void testSomething() {
        Runnable onLoadCallback = new Runnable() {
            public void run() {
                ...
            }
        };
        VisualizationUtils.loadVisualizationApi(onLoadCallback, Table.PACKAGE);
    }
}
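The key constraint can be modeled in plain Java: nothing that touches the Visualization API may run until the load callback fires. A minimal sketch, with a hypothetical `ApiLoader` standing in for `VisualizationUtils`:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for VisualizationUtils.loadVisualizationApi():
// callbacks are queued and only run once loading has finished, so a
// DataTable.create() call placed outside the callback runs too early.
class ApiLoader {
    private final List<Runnable> pending = new ArrayList<>();
    private boolean loaded = false;

    void load(Runnable onLoad) {       // register work that needs the API
        if (loaded) {
            onLoad.run();
        } else {
            pending.add(onLoad);
        }
    }

    void finishLoading() {             // simulates the script arriving
        loaded = true;
        for (Runnable r : pending) r.run();
        pending.clear();
    }
}
```

In a real GWTTestCase you would also call delayTestFinish() before triggering the load and finishTest() at the end of the callback, so the test framework waits for the asynchronous load instead of finishing immediately.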

Related

Replacing every block in a chunk crashes server

I am replacing blocks in chunks. Every time a new chunk is loaded, I replace its generated blocks with a random other material. Here is my code:
package de.belinked.chunkrandomizer;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;
import org.bukkit.Bukkit;
import org.bukkit.Chunk;
import org.bukkit.Material;
import org.bukkit.block.Block;
import org.bukkit.event.EventHandler;
import org.bukkit.event.Listener;
import org.bukkit.event.world.ChunkLoadEvent;
import org.bukkit.plugin.java.JavaPlugin;
public class ChunkRandomizer extends JavaPlugin implements Listener {
public List<Material> blocks = Arrays.asList(
Material.ACACIA_LEAVES,
Material.ACACIA_LOG,
// I'll leave this out, just every full, solid block
Material.YELLOW_STAINED_GLASS,
Material.YELLOW_TERRACOTTA,
Material.YELLOW_WOOL
);
@Override
public void onEnable() {
getServer().getPluginManager().registerEvents(this, this);
Bukkit.broadcastMessage(this.prefix + "The chunk randomizer was loaded successfully");
}
@Override
public void onDisable() {
}
public Material getRandomMaterial(List l) {
int rnd = ThreadLocalRandom.current().nextInt(l.size());
Material m = (Material) l.get(rnd);
return m;
}
@EventHandler
public void onChunkLoad(ChunkLoadEvent e) {
if(e.isNewChunk()) {
Chunk chunk = e.getChunk();
Block b;
Material m = getRandomMaterial(this.blocks);
for(int y = -64; y <= 320; y++) {
for(int x = 0; x < 16; x++) {
for(int z = 0; z < 16; z++) {
b = chunk.getBlock(x, y, z);
if(!b.getType().isAir()
&& b.getType() != Material.BEDROCK
&& b.getType() != Material.WATER
&& b.getType() != Material.LAVA
&& b.getType() != Material.END_PORTAL_FRAME
&& b.getType() != Material.END_PORTAL) {
b.setType(m);
}
}
}
}
}
}
}
but when I join the server and load a few chunks, I get this log:
https://pastebin.com/vA8qHSUr
Can anyone help me fix this?
Edit: now I get kicked, and for a while nothing happens; then I get this log, which is even too long for the console: https://pastebin.com/8eZ4Ja4m
In the error, at line 77, it says:
Cannot get data for not block BRICK
I think the problem comes from your blocks list.
Somewhere in it there must be a Material.BRICK, but the material should probably be Material.BRICKS: BRICKS is the whole block, while BRICK is the baked-clay item used to craft that block.
You should also check that you have api-version: 1.13 in your plugin.yml.
Some material names changed in 1.13, and Spigot does not detect them correctly without this option.
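One way to catch this class of mistake early is to validate the configured names against the enum before using them, instead of failing deep inside chunk population. A self-contained sketch (with a tiny stand-in `Material` enum, not Bukkit's):

```java
import java.util.ArrayList;
import java.util.List;

public class MaterialCheck {
    // Stand-in for org.bukkit.Material; in this sketch there is a BRICKS
    // constant but no BRICK, mirroring the block-vs-item confusion.
    enum Material { BRICKS, ACACIA_LOG, YELLOW_WOOL }

    // Returns the names that do not match any enum constant.
    static List<String> invalidNames(List<String> names) {
        List<String> bad = new ArrayList<>();
        for (String name : names) {
            try {
                Material.valueOf(name);
            } catch (IllegalArgumentException e) {
                bad.add(name);   // e.g. "BRICK" instead of "BRICKS"
            }
        }
        return bad;
    }
}
```

Running a check like this in onEnable() and logging the bad names would surface the typo at plugin load instead of crashing during chunk generation.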

How do I run a testNG test multiple times, based on an internal logic?

I have gone through some solutions that describe TestNG's DataProvider and invocationCount, but both come into the picture even before my @Test method starts. My requirement is this: I have a DataReader class which reads data from an Excel file as key-value pairs (keys are always in the first row, and there can be more than one row of values). If, say, 2 rows of values are available, I would have to run the same @Test with the other set of data as well (it would be great if I could run the @BeforeClass and @AfterClass methods for each iteration of the @Test).
Something Like This:
@BeforeClass
// Some code here that runs on each iteration of @Test
@Test
public void myTest() {
    // make a decision here: based on the number of rows of values, run the test multiple times
    DataReader.LoadDataSheet("TestData.xlsx", "SheetName");
}
@AfterClass
// Some code here that runs on each iteration of @Test
What you need here is a Factory-powered data provider.
The data provider bound to the @Factory method supplies each test-class instance with its data; that data is then consumed by the instance's own data provider, which iterates the test method as many times as required.
The below sample should be able to clarify this.
import org.assertj.core.api.Assertions;
import org.testng.ITestResult;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Factory;
import org.testng.annotations.Test;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
public class TestClassSample {
private List<String> data;
@Factory(dataProvider = "getDataForInstances")
public TestClassSample(List<String> data) {
this.data = data;
}
@BeforeMethod
public void beforeMethod(Object[] parameters) {
System.err.println("Printing Parameters before running test method " + Arrays.toString(parameters));
}
@Test(dataProvider = "getData")
public void testMethod(String text) {
System.err.println("Printing Parameters when running test method [" + text + "]");
Assertions.assertThat(text).isNotEmpty();
}
@AfterMethod
public void afterMethod(ITestResult result) {
System.err.println("Printing Parameters after running test method " + Arrays.toString(result.getParameters()));
}
@DataProvider(name = "getData")
public Object[][] getData() {
//This data provider simulates the iterations that every test method has to go through based on
//the outer data provider viz., "getDataForInstances()"
Object[][] iterationData = new Object[data.size()][1];
for (int i = 0; i < data.size(); i++) {
iterationData[i] = new String[]{data.get(i)};
}
return iterationData;
}
@DataProvider(name = "getDataForInstances")
public static Object[][] getDataForInstances() {
//This data provider simulates data being read from excel, wherein it would return the number of
//iterations that every test method should go through.
return new Object[][]{
{Collections.singletonList("Java")},
{Arrays.asList("TestNG", "JUnit")},
{Arrays.asList("Maven", "Gradle", "Ant")}
};
}
}
Here's the output:
Printing Parameters before running test method [Maven]
Printing Parameters when running test method [Maven]
Printing Parameters after running test method [Maven]
Printing Parameters before running test method [Gradle]
Printing Parameters when running test method [Gradle]
Printing Parameters after running test method [Gradle]
Printing Parameters before running test method [Ant]
Printing Parameters when running test method [Ant]
Printing Parameters after running test method [Ant]
Printing Parameters before running test method [TestNG]
Printing Parameters when running test method [TestNG]
Printing Parameters after running test method [TestNG]
Printing Parameters before running test method [JUnit]
Printing Parameters when running test method [JUnit]
Printing Parameters after running test method [JUnit]
Printing Parameters before running test method [Java]
Printing Parameters when running test method [Java]
Printing Parameters after running test method [Java]
===============================================
Default Suite
Total tests run: 6, Failures: 0, Skips: 0
===============================================
I'm not a fancy coder, but here is how I wanted it.
Test Class :
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;
public class Test2 extends BaseTset {
@Test(dataProvider = "dataprovider")
public void test(DataReader2 reader) {
for (int i = 1; i <= 10; i++) {
System.out.println("Test : " + reader.getValue("Key_" + i));
}
}
@DataProvider(name = "dataprovider")
public DataReader2[] dataProvider() {
String dataFileName = "TestData.xlsx";
DataReader2 reader = new DataReader2(dataFileName);
int rowCount = reader.getRowCount();
DataReader2[] reader2 = new DataReader2[rowCount];
for (int i = 0; i < rowCount; i++) {
int j = i + 1;
reader2[i] = DataReader2.getReader(dataFileName, j);
}
return reader2;
}
}
Data Reader 2 Class :
import java.io.FileInputStream;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.poi.ss.usermodel.DataFormatter;
import org.apache.poi.ss.usermodel.FormulaEvaluator;
import org.apache.poi.xssf.usermodel.XSSFCell;
import org.apache.poi.xssf.usermodel.XSSFRow;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
public class DataReader2 {
private Map<String, String> dataSet = new HashMap<String, String>();
private XSSFWorkbook ExcelWBook;
private XSSFSheet ExcelWSheet;
private FormulaEvaluator Evaluator;
private XSSFCell Cell;
private XSSFRow Row;
private int rowCount;
private int columnCount;
public DataReader2(String FileName, int rowNum) {
loadDataFile(FileName);
dataSet = getDataSet(rowNum);
}
public DataReader2(String FileName) {
loadDataFile(FileName);
}
public static DataReader2 getReader(String FileName, int rowNum) {
DataReader2 dataReader = new DataReader2(FileName, rowNum);
return dataReader;
}
private void loadDataFile(String FileName) {
try {
String FilePath = "./data/" + FileName;
FileInputStream ExcelFile = new FileInputStream(FilePath);
ExcelWBook = new XSSFWorkbook(ExcelFile);
Evaluator = ExcelWBook.getCreationHelper().createFormulaEvaluator();
ExcelWSheet = ExcelWBook.getSheetAt(0);
} catch (IOException e) {
e.printStackTrace();
}
}
public int getRowCount() {
rowCount = ExcelWSheet.getLastRowNum();
return rowCount;
}
private Map<String, String> getDataSet(int rowNum) {
DataFormatter formatter = new DataFormatter();
Row = ExcelWSheet.getRow(rowNum);
columnCount = Row.getLastCellNum();
for (int i = 0; i < columnCount; i++) {
Cell = ExcelWSheet.getRow(0).getCell(i);
String key = formatter.formatCellValue(Cell, Evaluator);
Cell = Row.getCell(i);
String value = formatter.formatCellValue(Cell, Evaluator);
dataSet.put(key, value);
}
return dataSet;
}
public String getValue(String key) {
    try {
        key = key.trim();
        // get() returns null for a missing key; the original code called
        // trim() on it before checking, which throws a NullPointerException
        // instead of reaching the intended error message
        String value = dataSet.get(key);
        if (value != null && !value.trim().isEmpty()) {
            return value.trim();
        } else {
            throw (new Exception("No key with name : " + key + " available in datasheet."));
        }
    } catch (Exception e) {
        e.printStackTrace();
        return "";
    }
}
}
Base Test Class :
import org.testng.annotations.AfterClass;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.BeforeMethod;
public class BaseTset {
@BeforeClass
public void beforeClass() {
System.out.println("Before class");
}
@BeforeMethod
public void beforeMethod() {
System.out.println("Before method");
}
@AfterMethod
public void AfterMethod() {
System.out.println("After method");
}
@AfterClass
public void afterClass() {
System.out.println("After Class");
}
}
Output :
[RemoteTestNG] detected TestNG version 6.12.0
Before class
Before method
Test : Value_1
Test : Value_2
Test : Value_3
Test : Value_4
Test : Value_5
Test : Value_6
Test : Value_7
Test : Value_8
Test : Value_9
Test : Value_10
After method
Before method
Test : Value_11
Test : Value_12
Test : Value_13
Test : Value_14
Test : Value_15
Test : Value_16
Test : Value_17
Test : Value_18
Test : Value_19
Test : Value_20
After method
Before method
Test : Value_21
Test : Value_22
Test : Value_23
Test : Value_24
Test : Value_25
Test : Value_26
Test : Value_27
Test : Value_28
Test : Value_29
Test : Value_30
After method
Before method
Test : Value_31
Test : Value_32
Test : Value_33
Test : Value_34
Test : Value_35
Test : Value_36
Test : Value_37
Test : Value_38
Test : Value_39
Test : Value_40
After method
After Class
PASSED: test(datareader.DataReader2@186beff)
PASSED: test(datareader.DataReader2@78afa0)
PASSED: test(datareader.DataReader2@1c2959f)
PASSED: test(datareader.DataReader2@19982de)
===============================================
Default test
Tests run: 4, Failures: 0, Skips: 0
===============================================
===============================================
Default suite
Total tests run: 4, Failures: 0, Skips: 0
===============================================

Apache flink pattern conditions with list

I wrote a pattern. I have a list of conditions (the rules come from JSON). The data (also JSON) is coming from a Kafka server, and I want to filter it with this list, but it is not working. How can I do that?
I am also not sure about the keyed stream and building the alarms in a for loop. Can Flink work like this?
main program:
package cep_kafka_eample.cep_kafka;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;
import com.google.gson.Gson;
import com.google.gson.JsonArray;
import com.google.gson.JsonParser;
import org.apache.flink.cep.CEP;
import org.apache.flink.cep.PatternSelectFunction;
import org.apache.flink.cep.PatternStream;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.SlidingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
import org.apache.flink.streaming.util.serialization.JSONDeserializationSchema;
import util.AlarmPatterns;
import util.Rules;
import util.TypeProperties;
import java.io.FileReader;
import java.util.*;
public class MainClass {
public static void main( String[] args ) throws Exception
{
ObjectMapper mapper = new ObjectMapper();
JsonParser parser = new JsonParser();
Object obj = parser.parse(new FileReader(
"c://new 5.json"));
JsonArray array = (JsonArray)obj;
Gson googleJson = new Gson();
List<Rules> ruleList = new ArrayList<>();
for(int i = 0; i< array.size() ; i++) {
Rules jsonObjList = googleJson.fromJson(array.get(i), Rules.class);
ruleList.add(jsonObjList);
}
//apache kafka properties
Properties properties = new Properties();
properties.setProperty("zookeeper.connect", "localhost:2181");
properties.setProperty("bootstrap.servers", "localhost:9092");
//starting flink
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(1000).setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
//get kafka values
FlinkKafkaConsumer010<ObjectNode> myConsumer = new FlinkKafkaConsumer010<>("demo", new JSONDeserializationSchema(),
properties);
List<Pattern<ObjectNode,?>> patternList = new ArrayList<>();
DataStream<ObjectNode> dataStream = env.addSource(myConsumer);
dataStream.windowAll(SlidingProcessingTimeWindows.of(Time.seconds(10), Time.seconds(5)));
DataStream<ObjectNode> keyedStream = dataStream;
//get pattern list, keyeddatastream
for(Rules rules : ruleList){
List<TypeProperties> typePropertiesList = rules.getTypePropList();
for (int i = 0; i < typePropertiesList.size(); i++) {
TypeProperties typeProperty = typePropertiesList.get(i);
if (typeProperty.getGroupType() != null && typeProperty.getGroupType().equals("group")) {
keyedStream = keyedStream.keyBy(
jsonNode -> jsonNode.get(typeProperty.getPropName().toString())
);
}
}
Pattern<ObjectNode,?> pattern = new AlarmPatterns().getAlarmPattern(rules);
patternList.add(pattern);
}
//CEP pattern and alarms
List<DataStream<Alert>> alertList = new ArrayList<>();
for(Pattern<ObjectNode,?> pattern : patternList){
PatternStream<ObjectNode> patternStream = CEP.pattern(keyedStream, pattern);
DataStream<Alert> alarms = patternStream.select(new PatternSelectFunction<ObjectNode, Alert>() {
private static final long serialVersionUID = 1L;
public Alert select(Map<String, List<ObjectNode>> map) throws Exception {
return new Alert("new message");
}
});
alertList.add(alarms);
}
env.execute("Flink CEP monitoring job");
}
}
getAlarmPattern:
package util;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.IterativeCondition;
import org.apache.flink.streaming.api.datastream.DataStream;
import com.fasterxml.jackson.databind.node.ObjectNode;
public class AlarmPatterns {
public Pattern<ObjectNode, ?> getAlarmPattern(Rules rules) {
//MySimpleConditions conditions = new MySimpleConditions();
Pattern<ObjectNode, ?> alarmPattern = Pattern.<ObjectNode>begin("first")
.where(new IterativeCondition<ObjectNode>() {
@Override
public boolean filter(ObjectNode jsonNodes, Context<ObjectNode> context) throws Exception {
for (Criterias criterias : rules.getCriteriaList()) {
if (criterias.getCriteriaType().equals("equals")) {
return jsonNodes.get(criterias.getPropName()).equals(criterias.getCriteriaValue());
} else if (criterias.getCriteriaType().equals("greaterThen")) {
if (!jsonNodes.get(criterias.getPropName()).equals(criterias.getCriteriaValue())) {
return false;
}
int count = 0;
for (ObjectNode node : context.getEventsForPattern("first")) {
count += node.get("value").asInt();
}
return Integer.compare(count, 5) > 0;
} else if (criterias.getCriteriaType().equals("lessThen")) {
if (!jsonNodes.get(criterias.getPropName()).equals(criterias.getCriteriaValue())) {
return false;
}
int count = 0;
for (ObjectNode node : context.getEventsForPattern("first")) {
count += node.get("value").asInt();
}
return Integer.compare(count, 5) < 0;
}
}
return false;
}
}).times(rules.getRuleCount());
return alarmPattern;
}
}
Thanks for using FlinkCEP!
Could you provide some more details about the exact error message (if any)? This will help a lot in pinning down the problem.
From a first look at the code, I can make the following observations:
At first, the line:
dataStream.windowAll(SlidingProcessingTimeWindows.of(Time.seconds(10), Time.seconds(5)));
will never be executed, as you never use this stream in the rest of your program.
Second, you should attach a sink after the select(), e.g. call print() on each of your PatternStreams; if you do not, your output gets discarded. You can have a look here for examples, although the list is far from exhaustive.
Finally, I would recommend adding a within() clause to your pattern, so that you do not run out of memory.
The error was from my JSON object; I will fix it. When I run the job from IntelliJ, CEP doesn't work, but when I submit it from the Flink console it works.

Unexpected results moving files between folders

I am trying to add a very simple feature to a Java program: it moves all the files from two folders to a third "archive" folder. The code is simple and I understand it 100%; the problem is that only one of the folders' contents is being moved. I have gone over the code with a fine-tooth comb and tried re-pasting the directory several times; nothing seems to work. If anyone could help me figure out why my second folder's contents aren't being moved, I would REALLY appreciate it.
FYI, in order to test this code you need to add a couple of folders to "My Documents":
"Pain008Files", "Camt54 Files" and "archive". You also need to add some kind of text file to the Pain008Files and Camt54 Files folders; it can just contain a random letter, anything that can be moved.
At runtime, the Pain008Files folder correctly has all its files moved to the archive folder; the Camt54 Files folder does not. The only cause I can think of is that the space in the Camt54 Files name is causing a problem, but that doesn't make sense, so I thought I would hold off on changing it until I get some help. Thanks in advance!
Main Class
package fileHandling;
public class moveTestMain
{
public static void main(String args[]){
GetUser gUser = new GetUser();
gUser.getUser();
MoveFiles mFiles = new MoveFiles();
mFiles.moveCamtFiles();
mFiles.movePainFiles();
}
}
Gets the user-name class
package fileHandling;
public class GetUser
{
public static String currentUser = null;
public void getUser(){
currentUser = System.getProperty("user.name");
}
}
Move the files class
package fileHandling;
import java.io.File;
import java.nio.file.Files;
import java.util.ArrayList;
import java.util.Arrays;
public class MoveFiles
{
public static ArrayList<File> pain008Files;
public static ArrayList<File> camt54Files;
public void movePainFiles(){
File pain008File = new File("C:\\Users\\"+GetUser.currentUser+"\\Documents\\Pain008Files");
pain008Files = new ArrayList<File>(Arrays.asList(pain008File.listFiles()));
System.out.println(pain008Files);
for(int i = 0; i < pain008Files.size(); i++){
System.out.println("Test");
int cutAmount = GetUser.currentUser.length();
String fileName = pain008Files.get(i).toString().substring(33+cutAmount,pain008Files.get(i).toString().length());
System.out.println(fileName);
System.out.println(pain008Files.get(i).toString());
pain008Files.get(i).renameTo(new File("C:\\Users\\"+GetUser.currentUser+"\\Documents\\archive\\"+
"archivedPain_"+fileName));
}
}
public void moveCamtFiles(){
File camt54File = new File("C:\\Users\\"+GetUser.currentUser+"\\Documents\\Camt54 Files");
camt54Files = new ArrayList<File>(Arrays.asList(camt54File.listFiles()));
for(int i = 0; i < camt54Files.size(); i++){
int cutAmount = GetUser.currentUser.length();
String fileName = camt54Files.get(i).toString().substring(32+cutAmount,camt54Files.get(i).toString().length());
camt54Files.get(i).renameTo(new File("C:\\Users\\"+GetUser.currentUser+"\\Documents\\archive\\"+
"archivedCamt_"+fileName));
}
}
}
SHORT ANSWER:
Your code has some typo errors in the paths or somewhere similar...
LONG ANSWER:
I adapted it to local paths on my computer and it works fine. Note that it uses getName() instead of fragile substring offsets:
public void movePainFiles() {
File pain008File = new File("C:\\tmp\\pain");
pain008Files = new ArrayList<File>(Arrays.asList(pain008File.listFiles()));
System.out.println(pain008Files);
for (int i = 0; i < pain008Files.size(); i++) {
System.out.println(pain008Files.get(i).toString());
pain008Files.get(i).renameTo(new File("C:\\tmp\\archive\\" + "archivedPain_" + pain008Files.get(i).getName()));
}
}
public void moveCamtFiles() {
File camt54File = new File("C:\\tmp\\camt");
camt54Files = new ArrayList<File>(Arrays.asList(camt54File.listFiles()));
for (int i = 0; i < camt54Files.size(); i++) {
System.out.println(camt54Files.get(i).toString());
camt54Files.get(i).renameTo(new File("C:\\tmp\\archive\\" + "archivedCamt_" + camt54Files.get(i).getName()));
}
}
OUTPUT:
C:\tmp\camt\xxx.pdf
C:\tmp\camt\yyy.pdf
C:\tmp\camt\zzz.pdf
[C:\tmp\pain\Q37024973.txt, C:\tmp\pain\Q37545784.txt]
C:\tmp\pain\Q37024973.txt
C:\tmp\pain\Q37545784.txt
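A further point worth knowing: File.renameTo() returns false silently when it fails, so a wrong substring offset or a locked file goes unnoticed. A sketch using java.nio.file instead (hypothetical `ArchiveMover`, folder names chosen for illustration), which throws a descriptive exception on failure and uses getFileName() rather than index arithmetic:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.stream.Stream;

public class ArchiveMover {
    // Moves every regular file from sourceDir into archiveDir, prefixing
    // the archived name; returns the number of files moved.
    static int archiveAll(Path sourceDir, Path archiveDir, String prefix) throws IOException {
        Files.createDirectories(archiveDir);
        int moved = 0;
        try (Stream<Path> files = Files.list(sourceDir)) {
            for (Path file : (Iterable<Path>) files::iterator) {
                if (!Files.isRegularFile(file)) continue;
                Path target = archiveDir.resolve(prefix + file.getFileName());
                // Unlike File.renameTo(), Files.move() throws an IOException
                // explaining what went wrong instead of returning false.
                Files.move(file, target, StandardCopyOption.REPLACE_EXISTING);
                moved++;
            }
        }
        return moved;
    }
}
```

With this approach a path problem (such as a wrong folder name containing a space) fails loudly with the offending path in the exception message, rather than leaving one folder untouched.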

How can I get more logging feedback when I have a test suite that uses #RunWith?

I have a custom test runner I've made to run a portion of my tests so I can distribute the tests on different jenkins nodes. If all my integration tests were run it would take an hour. So I have 3 servers running 1/3rd of the tests and this only takes 20 minutes total. Here's how my suite looks:
import junit.framework.JUnit4TestAdapter;
import junit.framework.TestSuite;
import org.junit.Ignore;
import org.junit.extensions.cpsuite.ClassesFinder;
import org.junit.extensions.cpsuite.ClasspathFinderFactory;
import org.junit.extensions.cpsuite.SuiteType;
import org.junit.runner.RunWith;
import org.junit.runners.AllTests;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
@RunWith(AllTests.class)
public class DistributedIntegrationTestRunner {
private static Logger log = LoggerFactory.getLogger(DistributedIntegrationTestRunner.class);
public static TestSuite suite() {
TestSuite suite = new TestSuite();
ClassesFinder classesFinder = new ClasspathFinderFactory().create(true,
new String[]{".*IntegrationTest.*"},
new SuiteType[]{SuiteType.TEST_CLASSES},
new Class[]{Object.class},
new Class[]{},
"java.class.path");
int nodeNumber = systemPropertyInteger("node.number", "0");
int totalNodes = systemPropertyInteger("total.nodes", "1");
List<Class<?>> allTestsSorted = getAllTestsSorted(classesFinder);
allTestsSorted = filterIgnoredTests(allTestsSorted);
List<Class<?>> myTests = getMyTests(allTestsSorted, nodeNumber, totalNodes);
log.info("There are " + allTestsSorted.size() + " tests to choose from and I'm going to run " + myTests.size() + " of them.");
for (Class<?> myTest : myTests) {
log.info("I will run " + myTest.getName());
suite.addTest(new JUnit4TestAdapter(myTest));
}
return suite;
}
private static int systemPropertyInteger(String propertyKey, String defaultValue) {
String slaveNumberString = System.getProperty(propertyKey, defaultValue);
return Integer.parseInt(slaveNumberString);
}
private static List<Class<?>> filterIgnoredTests(List<Class<?>> allTestsSorted) {
ArrayList<Class<?>> filteredTests = new ArrayList<Class<?>>();
for (Class<?> aTest : allTestsSorted) {
if (aTest.getAnnotation(Ignore.class) == null) {
filteredTests.add(aTest);
}
}
return filteredTests;
}
/*
TODO: make this algorithm less naive. Sort each test by run duration as described here: http://blog.tradeshift.com/just-add-servers/
*/
private static List<Class<?>> getAllTestsSorted(ClassesFinder classesFinder) {
List<Class<?>> allTests = classesFinder.find();
Collections.sort(allTests, new Comparator<Class<?>>() {
@Override
public int compare(Class<?> o1, Class<?> o2) {
return o1.getSimpleName().compareTo(o2.getSimpleName());
}
});
return allTests;
}
private static List<Class<?>> getMyTests(List<Class<?>> allTests, int nodeNumber, int totalNodes) {
List<Class<?>> myTests = new ArrayList<Class<?>>();
for (int i = 0; i < allTests.size(); i++) {
Class<?> thisTest = allTests.get(i);
if (i % totalNodes == nodeNumber) {
myTests.add(thisTest);
}
}
return myTests;
}
}
This sort of works, except when I try to use multiple DistributedIntegrationTestRunners in different modules and run them all at once, "things don't work". Normally I don't post on SO with a complaint that vague, but it's very difficult to figure out more, because this code does not give much feedback. This is the only logging I get:
log.info("There are " + allTestsSorted.size() + " tests to choose from and I'm going to run " + myTests.size() + " of them.");
for (Class<?> myTest : myTests) {
log.info("I will run " + myTest.getName());
This happens before any of the tests are run. It would be very useful if I could get more logging than this; for example, printing "I'm about to run FooTest" right before that test runs. Is there any way to do this?
I've skimmed through the source code and have seen that there's a field named private final RunNotifier fNotifier; in org.junit.internal.runners.JUnit38ClassRunner, but I'm not sure how to hook into that or whether I'd be going in the right direction.
I'm using JUnit 4.10, but I can upgrade if necessary.
This looks like an XY problem to me.
Log files are not for testing; they are for monitoring and debugging. Once you have a failing test case, you need to switch from thinking about testing to thinking about debugging. That means running the failing test cases individually, perhaps using a debugger.
If your tests fail with unhelpful failure messages, that suggests you have poor low-level test coverage, or that the diagnostic messages from your test cases are poor.
