Updating parameter in SwingWorker - java

I need some help. I'm making a program that works like a file manager, and it needs to perform several file copies at once. I use a SwingWorker so I can show the progress of the copies in a JProgressBar, but I need to know how to add more files to copy to a task that is already running, with the same destination.
This is my class that extends SwingWorker. In my main program I select some files or folders to copy to one destination. What I need is to be able to add more files to the CopyItem ArrayList while the CopyTask is working.
Please help, and sorry about my English.
import java.awt.Dimension;
import java.awt.Toolkit;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;
import javax.swing.JDialog;
import javax.swing.JOptionPane;
import javax.swing.JProgressBar;
import javax.swing.SwingWorker;
import xray.XRAYView;
public class CopyTask extends SwingWorker<Void, Integer>
{
ArrayList<CopyItem> copia;
private long totalBytes = 0L;
private long copiedBytes = 0L;
JProgressBar progressAll;
JProgressBar progressCurrent;
boolean override=true;
boolean overrideall=false;
public CopyTask(ArrayList<CopyItem> copia, JProgressBar progressAll, JProgressBar progressCurrent)
{
this.copia = copia;
this.progressAll = progressAll;
this.progressCurrent = progressCurrent;
progressAll.setValue(0);
progressCurrent.setValue(0);
totalBytes = retrieveTotalBytes(copia);
}
public void AgregarCopia(ArrayList<CopyItem> addcopia) throws Exception {
copia.addAll(copia.size(), addcopia);
totalBytes = retrieveTotalBytes(addcopia) + totalBytes;
System.out.println("AL AGREGAR: " + copia.size() + " Tamaño " + totalBytes);
}
public File getDriveDest(){
File dest = new File(copia.get(0).getOrigen().getPath().split("\\\\")[0]); // split on a literal backslash: the regex needs "\\\\"; "\\" alone is an invalid pattern
return dest;
}
@Override
public Void doInBackground() throws Exception
{
for(CopyItem cop:copia){
File ori=cop.getOrigen();
File des=new File(cop.getDestino().getPath());
if(!des.exists()){
des.mkdirs();
}
if(!overrideall){
override =true;
}
File para=new File(cop.getDestino().getPath()+"\\"+ori.getName());
copyFiles(ori, para);
}
return null;
}
@Override
public void process(List<Integer> chunks)
{
for(int i : chunks)
{
progressCurrent.setValue(i);
}
}
@Override
public void done()
{
setProgress(100);
}
private long retrieveTotalBytes(ArrayList<CopyItem>fich)
{
long size=0;
for(CopyItem cop: fich)
{
size += cop.getOrigen().length();
}
return size;
}
private void copyFiles(File sourceFile, File targetFile) throws IOException
{
if(overrideall==false){
if(targetFile.exists() && !targetFile.isDirectory()){
String []options={"Si a Todos","Si","No a Ninguno","No"};
int seleccion=JOptionPane.showOptionDialog(null, "El fichero \n"+targetFile+" \n se encuentra en el equipo, \n¿Desea sobreescribirlo?", "Colisión de ficheros", JOptionPane.DEFAULT_OPTION, JOptionPane.WARNING_MESSAGE, null, options, null);
switch(seleccion){
case 0:
override=true;
overrideall=true;
break;
case 1:
override=true;
overrideall=false;
break;
case 2:
override =false;
overrideall=true;
break;
case 3:
override =false;
overrideall=false;
break;
}
}
}
if(override || !targetFile.exists()){
FileInputStream LeeOrigen= new FileInputStream(sourceFile);
OutputStream Salida = new FileOutputStream(targetFile);
byte[] buffer = new byte[1024];
int tamaño;
long fileBytes = sourceFile.length();
long totalBytesCopied = 0;
while ((tamaño = LeeOrigen.read(buffer)) > 0) {
Salida.write(buffer, 0, tamaño);
totalBytesCopied += tamaño;
copiedBytes+= tamaño;
setProgress((int) Math.round(((double) copiedBytes / (double) totalBytes) * 100)); // just read copiedBytes here; the stray ++ was inflating the running total
int progress = (int)Math.round(((double)totalBytesCopied / (double)fileBytes) * 100);
publish(progress);
}
Salida.close();
LeeOrigen.close();
publish(100);
}
}
}
Here is the CopyItem class:
import java.io.File;
public class CopyItem {
File origen;
File destino;
String root;
public CopyItem(File origen, File destino) {
this.origen = origen;
this.destino = destino;
}
public CopyItem(File origen, File destino, String root) {
this.origen = origen;
this.destino = destino;
this.root = root;
}
public String getRoot() {
return root;
}
public void setRoot(String root) {
this.root = root;
}
public File getOrigen() {
return origen;
}
public void setOrigen(File origen) {
this.origen = origen;
}
public File getDestino() {
return destino;
}
public void setDestino(File destino) {
this.destino = destino;
}
@Override
public String toString() {
return super.toString(); //To change body of generated methods, choose Tools | Templates.
}
}

Yes, you can add the files directly to the source list (the list that contains the files to be copied), but you need to synchronize your code, because the files will be added from a different thread (the UI thread). Another way is to implement producer/consumer using a BlockingQueue:
The consumer runs in a separate thread or SwingWorker while the file copying is in progress.
The producer runs on the UI thread (selecting more files).
Both should have access to the BlockingQueue (which contains the files to be copied). BlockingQueue implementations are thread-safe according to the documentation, and a blocking queue has the advantage that it blocks execution and waits for files to be added, which is very useful if you don't know when the files are added.
I prefer using a thread pool to manage the thread executions (optional).
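As a minimal sketch of the BlockingQueue approach (the class name QueueCopyTask and the poison-pill marker are made up for illustration; CopyItem is the class from the question):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import javax.swing.SwingWorker;
public class QueueCopyTask extends SwingWorker<Void, Integer> {
    // Sentinel item used to signal "no more files will be added".
    public static final CopyItem POISON = new CopyItem(null, null);
    // Thread-safe queue shared with the UI thread.
    private final BlockingQueue<CopyItem> queue;
    public QueueCopyTask(BlockingQueue<CopyItem> queue) {
        this.queue = queue;
    }
    @Override
    protected Void doInBackground() throws Exception {
        while (true) {
            CopyItem item = queue.take(); // blocks until the UI thread adds an item
            if (item == POISON) {
                break;
            }
            // copy item.getOrigen() to item.getDestino() as in the original copyFiles()
        }
        return null;
    }
}
// On the UI thread:
// BlockingQueue<CopyItem> queue = new LinkedBlockingQueue<>();
// new QueueCopyTask(queue).execute();
// queue.put(new CopyItem(origen, destino)); // safe to call while the copy is running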

Related

Minecraft plugin hanging on "Enabling plugin" and producing out of memory errors

Why would this code be having memory issues? It runs fine once, and then when I try to run it again it hangs on "Enabling plugin". It'll then give me an OutOfMemoryError such as
"Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "Worker-Main-10""
The code I am using, built on the Spigot API, is as follows:
import org.bukkit.Bukkit;
import org.bukkit.ChatColor;
import org.bukkit.entity.Bat;
import org.bukkit.entity.Entity;
import org.bukkit.entity.Player;
import org.bukkit.plugin.java.JavaPlugin;
import org.bukkit.scheduler.BukkitScheduler;
import java.io.*;
import java.util.ArrayList;
import java.util.Scanner;
import java.util.UUID;
public class COVID19 extends JavaPlugin {
private static ArrayList<CovidInfection> infections;
@Override
public void onEnable() {
infections = new ArrayList<CovidInfection>();
System.out.println("1");
try {
readInfections();
} catch (FileNotFoundException fnfe) {
fnfe.printStackTrace();
}
System.out.println("2");
this.getCommand("getInfected").setExecutor(new CommandGetInfected());
BukkitScheduler scheduler = getServer().getScheduler();
scheduler.scheduleSyncRepeatingTask(this, new Runnable() {
@Override
public void run() {
batCovid();
}
}, 0, 10);
System.out.println(4);
}
@Override
public void onDisable() {
try {
writeInfections();
} catch (IOException ioe) {
ioe.printStackTrace();
}
}
public void batCovid() {
System.out.println(3);
for(Player player : Bukkit.getOnlinePlayers()) {
for(Entity nearby : player.getNearbyEntities(6, 6, 6)) {
if (nearby instanceof Bat) {
String name = player.getName();
UUID uuid = player.getUniqueId();
infections.add(new CovidInfection(uuid, name, 14));
}
}
}
}
public void readInfections() throws FileNotFoundException {
File file = new File("infected.txt");
if(file.length() == 0) {
return;
}
Scanner input = new Scanner(file);
String line = input.nextLine();
while (!(line.equals(""))) {
infections.add(parseInfectionLine(line));
}
input.close();
}
public void writeInfections() throws IOException {
//File will be written as UUID,Name,DaysRemaining
FileWriter writer = new FileWriter("infected.txt", false);
for(CovidInfection infection : infections) {
writer.write(infection.toString());
}
writer.close();
}
private CovidInfection parseInfectionLine(String line) {
String[] words = line.replace("\n","").split(",");
return new CovidInfection(UUID.fromString(words[0]), words[1], Integer.parseInt(words[2]));
}
public static String getInfected() {
String compiled = "";
for (CovidInfection infection : infections) {
compiled += infection.toString() + "\n";
}
return compiled;
}
}
import org.bukkit.ChatColor;
import org.bukkit.command.Command;
import org.bukkit.command.CommandExecutor;
import org.bukkit.command.CommandSender;
import org.bukkit.entity.Player;
public class CommandGetInfected implements CommandExecutor {
@Override
public boolean onCommand(CommandSender sender, Command cmd, String label, String[] args) {
String message = COVID19.getInfected();
if(!(message.equals(""))) {
sender.sendMessage(message);
} else {
sender.sendMessage("There are no infected!");
}
return(true);
}
}
import java.util.UUID;
public class CovidInfection {
private UUID uuid;
private String name;
private int days;
public CovidInfection(UUID uuid, String name, int days) {
this.uuid = uuid;
this.name = name;
this.days = days;
}
public int getDays() {
return days;
}
public String getName() {
return name;
}
public UUID getUuid() {
return uuid;
}
public void newDay() {
days--;
}
public String toString() {
return uuid.toString() + "," + name + "," + days + "\n";
}
}
Any help would be greatly appreciated, thank you!
Firstly, you are making an I/O request on the main thread.
To fix this issue, use multithreading, as explained here or here.
Then, this:
Scanner input = new Scanner(file);
String line = input.nextLine();
can't be used in a server.
An input like that already exists: it's the console sender.
To do that, I suggest you use ServerCommandEvent and Spigot's console.
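Going back to the first point, here is a minimal sketch of moving the file read off the main thread, assuming the standard Bukkit scheduler API (readInfections() is the method from the question):
// In onEnable(), run the blocking file I/O on a worker thread instead of
// calling readInfections() directly on the main server thread.
getServer().getScheduler().runTaskAsynchronously(this, new Runnable() {
    @Override
    public void run() {
        try {
            readInfections();
        } catch (FileNotFoundException fnfe) {
            fnfe.printStackTrace();
        }
    }
});
Keep in mind that anything that touches the Bukkit API itself (players, entities, commands) still has to run on the main thread.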

StreamingFileSink bulk writer results in some checkpoint error when running in AWS EMR

Unable to use StreamingFileSink and store incoming events in compressed form.
I am trying to use StreamingFileSink to write an unbounded event stream to S3. In the process, I would like to compress the data to make better use of the available storage size.
I wrote a compressed string writer by borrowing some code from Flink's SequenceFileWriterFactory. It fails with the exception described below.
If I try to use BucketingSink, it works great.
Using BucketingSink, I approached the compressed string write as below. Again, I borrowed this code from some other pull request.
import org.apache.flink.streaming.connectors.fs.StreamWriterBase;
import org.apache.flink.streaming.connectors.fs.Writer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.Compressor;
import java.io.IOException;
public class CompressionStringWriter<T> extends StreamWriterBase<T> implements Writer<T> {
private static final long serialVersionUID = 3231207311080446279L;
private String codecName;
private String separator;
public String getCodecName() {
return codecName;
}
public String getSeparator() {
return separator;
}
private transient CompressionOutputStream compressedOutputStream;
public CompressionStringWriter(String codecName, String separator) {
this.codecName = codecName;
this.separator = separator;
}
public CompressionStringWriter(String codecName) {
this(codecName, System.lineSeparator());
}
protected CompressionStringWriter(CompressionStringWriter<T> other) {
super(other);
this.codecName = other.codecName;
this.separator = other.separator;
}
@Override
public void open(FileSystem fs, Path path) throws IOException {
super.open(fs, path);
Configuration conf = fs.getConf();
CompressionCodecFactory codecFactory = new CompressionCodecFactory(conf);
CompressionCodec codec = codecFactory.getCodecByName(codecName);
if (codec == null) {
throw new RuntimeException("Codec " + codecName + " not found");
}
Compressor compressor = CodecPool.getCompressor(codec, conf);
compressedOutputStream = codec.createOutputStream(getStream(), compressor);
}
@Override
public void close() throws IOException {
if (compressedOutputStream != null) {
compressedOutputStream.close();
compressedOutputStream = null;
} else {
super.close();
}
}
@Override
public void write(Object element) throws IOException {
getStream();
compressedOutputStream.write(element.toString().getBytes());
compressedOutputStream.write(this.separator.getBytes());
}
@Override
public CompressionStringWriter<T> duplicate() {
return new CompressionStringWriter<>(this);
}
}
BucketingSink<DeviceEvent> bucketingSink = new BucketingSink<>("s3://"+ this.bucketName + "/" + this.objectPrefix);
bucketingSink
.setBucketer(new OrgIdBasedBucketAssigner())
.setWriter(new CompressionStringWriter<DeviceEvent>("Gzip", "\n"))
.setPartPrefix("file-")
.setPartSuffix(".gz")
.setBatchSize(1_500_000);
The version with BucketingSink works.
Now my code using StreamingFileSink involves the below set of classes.
import org.apache.flink.api.common.serialization.BulkWriter;
import java.io.IOException;
public class CompressedStringBulkWriter<T> implements BulkWriter<T> {
private final CompressedStringWriter compressedStringWriter;
public CompressedStringBulkWriter(final CompressedStringWriter compressedStringWriter) {
this.compressedStringWriter = compressedStringWriter;
}
@Override
public void addElement(T element) throws IOException {
this.compressedStringWriter.write(element);
}
@Override
public void flush() throws IOException {
this.compressedStringWriter.flush();
}
@Override
public void finish() throws IOException {
this.compressedStringWriter.close();
}
}
import org.apache.flink.api.common.serialization.BulkWriter;
import org.apache.flink.core.fs.FSDataOutputStream;
import org.apache.hadoop.conf.Configuration;
import java.io.IOException;
public class CompressedStringBulkWriterFactory<T> implements BulkWriter.Factory<T> {
private SerializableHadoopConfiguration serializableHadoopConfiguration;
public CompressedStringBulkWriterFactory(final Configuration hadoopConfiguration) {
this.serializableHadoopConfiguration = new SerializableHadoopConfiguration(hadoopConfiguration);
}
@Override
public BulkWriter<T> create(FSDataOutputStream out) throws IOException {
return new CompressedStringBulkWriter(new CompressedStringWriter(out, serializableHadoopConfiguration.get(), "Gzip", "\n"));
}
}
import org.apache.flink.core.fs.FSDataOutputStream;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;
import org.apache.flink.runtime.fs.hdfs.HadoopFileSystem;
import org.apache.flink.util.Preconditions;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.Compressor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.io.Serializable;
public class CompressedStringWriter<T> implements Serializable {
private static final Logger LOG = LoggerFactory.getLogger(CompressedStringWriter.class);
private static final long serialVersionUID = 2115292142239557448L;
private String separator;
private transient CompressionOutputStream compressedOutputStream;
public CompressedStringWriter(FSDataOutputStream out, Configuration hadoopConfiguration, String codecName, String separator) {
this.separator = separator;
try {
Preconditions.checkNotNull(hadoopConfiguration, "Unable to determine hadoop configuration using path");
CompressionCodecFactory codecFactory = new CompressionCodecFactory(hadoopConfiguration);
CompressionCodec codec = codecFactory.getCodecByName(codecName);
Preconditions.checkNotNull(codec, "Codec " + codecName + " not found");
LOG.info("The codec name that was loaded from hadoop {}", codec);
Compressor compressor = CodecPool.getCompressor(codec, hadoopConfiguration);
this.compressedOutputStream = codec.createOutputStream(out, compressor);
LOG.info("Setup a compressor for codec {} and compressor {}", codec, compressor);
} catch (IOException ex) {
throw new RuntimeException("Unable to compose a hadoop compressor for the path", ex);
}
}
public void flush() throws IOException {
if (compressedOutputStream != null) {
compressedOutputStream.flush();
}
}
public void close() throws IOException {
if (compressedOutputStream != null) {
compressedOutputStream.close();
compressedOutputStream = null;
}
}
public void write(T element) throws IOException {
compressedOutputStream.write(element.toString().getBytes());
compressedOutputStream.write(this.separator.getBytes());
}
}
import org.apache.hadoop.conf.Configuration;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
public class SerializableHadoopConfiguration implements Serializable {
private static final long serialVersionUID = -1960900291123078166L;
private transient Configuration hadoopConfig;
SerializableHadoopConfiguration(Configuration hadoopConfig) {
this.hadoopConfig = hadoopConfig;
}
Configuration get() {
return this.hadoopConfig;
}
// --------------------
private void writeObject(ObjectOutputStream out) throws IOException {
this.hadoopConfig.write(out);
}
private void readObject(ObjectInputStream in) throws IOException {
final Configuration config = new Configuration();
config.readFields(in);
if (this.hadoopConfig == null) {
this.hadoopConfig = config;
}
}
}
My actual Flink job:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
Properties kinesisConsumerConfig = new Properties();
...
...
DataStream<DeviceEvent> kinesis =
env.addSource(new FlinkKinesisConsumer<>(this.streamName, new DeviceEventSchema(), kinesisConsumerConfig)).name("source")
.setParallelism(16)
.setMaxParallelism(24);
final StreamingFileSink<DeviceEvent> bulkCompressStreamingFileSink = StreamingFileSink.<DeviceEvent>forBulkFormat(
path,
new CompressedStringBulkWriterFactory<>(
BucketingSink.createHadoopFileSystem(
new Path("s3a://"+ this.bucketName + "/" + this.objectPrefix),
null).getConf()))
.withBucketAssigner(new OrgIdBucketAssigner())
.build();
deviceEventDataStream.addSink(bulkCompressStreamingFileSink).name("bulkCompressStreamingFileSink").setParallelism(16);
env.execute();
I expect data to be saved in S3 as multiple files. Unfortunately no files are being created.
In the logs, I see the exception below:
2019-05-15 22:17:20,855 INFO org.apache.flink.runtime.taskmanager.Task - Sink: bulkCompressStreamingFileSink (11/16) (c73684c10bb799a6e0217b6795571e22) switched from RUNNING to FAILED.
java.lang.Exception: Could not perform checkpoint 1 for operator Sink: bulkCompressStreamingFileSink (11/16).
at org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:595)
at org.apache.flink.streaming.runtime.io.BarrierBuffer.notifyCheckpoint(BarrierBuffer.java:396)
at org.apache.flink.streaming.runtime.io.BarrierBuffer.processBarrier(BarrierBuffer.java:292)
at org.apache.flink.streaming.runtime.io.BarrierBuffer.getNextNonBlocked(BarrierBuffer.java:200)
at org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:209)
at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:105)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:300)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: Could not complete snapshot 1 for operator Sink: bulkCompressStreamingFileSink (11/16).
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:422)
at org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.checkpointStreamOperator(StreamTask.java:1113)
at org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:1055)
at org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:729)
at org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:641)
at org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:586)
... 8 more
Caused by: java.io.IOException: Stream closed.
at org.apache.flink.fs.s3.common.utils.RefCountedFile.requireOpened(RefCountedFile.java:117)
at org.apache.flink.fs.s3.common.utils.RefCountedFile.write(RefCountedFile.java:74)
at org.apache.flink.fs.s3.common.utils.RefCountedBufferingFileStream.flush(RefCountedBufferingFileStream.java:105)
at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.closeAndUploadPart(S3RecoverableFsDataOutputStream.java:199)
at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.closeForCommit(S3RecoverableFsDataOutputStream.java:166)
at org.apache.flink.streaming.api.functions.sink.filesystem.PartFileWriter.closeForCommit(PartFileWriter.java:71)
at org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.closeForCommit(BulkPartWriter.java:63)
at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.closePartFile(Bucket.java:239)
at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.prepareBucketForCheckpointing(Bucket.java:280)
at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.onReceptionOfCheckpoint(Bucket.java:253)
at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotActiveBuckets(Buckets.java:244)
at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotState(Buckets.java:235)
at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.snapshotState(StreamingFileSink.java:347)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.trySnapshotFunctionState(StreamingFunctionUtils.java:118)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.snapshotFunctionState(StreamingFunctionUtils.java:99)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.snapshotState(AbstractUdfStreamOperator.java:90)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:395)
So I am wondering what I am missing.
I am using the latest AWS EMR (5.23).
In CompressedStringBulkWriter#close(), you are calling close() on the CompressionOutputStream, which also closes the underlying stream, i.e. Flink's FSDataOutputStream. That stream has to stay open so that checkpointing can be done properly by Flink's internals to guarantee a recoverable stream. That is why you are getting
Caused by: java.io.IOException: Stream closed.
at org.apache.flink.fs.s3.common.utils.RefCountedFile.requireOpened(RefCountedFile.java:117)
at org.apache.flink.fs.s3.common.utils.RefCountedFile.write(RefCountedFile.java:74)
at org.apache.flink.fs.s3.common.utils.RefCountedBufferingFileStream.flush(RefCountedBufferingFileStream.java:105)
at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.closeAndUploadPart(S3RecoverableFsDataOutputStream.java:199)
at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.closeForCommit(S3RecoverableFsDataOutputStream.java:166)
So instead of compressedOutputStream.close(), use compressedOutputStream.finish(), which just flushes everything in the buffer to the output stream without closing it. BTW, there is a built-in HadoopCompressionBulkWriter available in the latest version of Flink; you can also use that.
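For reference, a minimal sketch of that change in the classes from the question (finish() on Hadoop's CompressionOutputStream writes the compressed trailer but leaves the wrapped stream open):
// In CompressedStringWriter: finish the codec without closing Flink's stream.
public void finish() throws IOException {
    if (compressedOutputStream != null) {
        compressedOutputStream.finish(); // e.g. writes the gzip trailer, keeps the stream open
        compressedOutputStream.flush();
    }
}
// In CompressedStringBulkWriter: delegate to finish() instead of close(), so
// Flink's S3RecoverableFsDataOutputStream stays open for closeForCommit().
@Override
public void finish() throws IOException {
    this.compressedStringWriter.finish();
}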

Serialization, saving objects

I have a problem with saving an object to a file. I have a class FileManager which contains a method that saves an object to a file. This method is used in a class Control which contains the main loop (choosing different options). I would like the object to be saved when choosing the EXIT option, but nothing happens. When I add a new option (i.e. 6 - Save database) the program works fine. I will be grateful for any clues about what could be wrong.
The FileManager class:
package utils;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import data.DataBase;
public class FileManager {
public static final String FILE_NAME = "file.txt";
public void writeDataBaseToFile(DataBase db) {
try (
FileOutputStream fos = new FileOutputStream(FILE_NAME);
ObjectOutputStream oos = new ObjectOutputStream(fos);
) {
oos.writeObject(db);
} catch (FileNotFoundException e) {
System.err.println("Błąd");
} catch (IOException e) {
System.err.println("Błąd");
}
}
}
The Control class:
package app;
import data.DataBase;
import data.Expense;
import data.Income;
import utils.AccountInfo;
import utils.AddData;
import utils.FileManager;
import utils.Info;
import utils.Options;
public class Control {
private AccountInfo account;
private AddData addData;
private DataBase dataBase;
private Info inf;
private Income income;
private FileManager fileManager;
public Control() {
addData = new AddData();
dataBase = new DataBase();
inf = new Info(dataBase);
account = new AccountInfo(dataBase);
fileManager = new FileManager();
}
public void ControlLoop() {
Options option;
printOptions();
while((option = Options.createOption(addData.getOption())) != Options.EXIT) {
try {
switch(option) {
case ADD_INCOME:
addIncome();
break;
case ADD_EXPENSE:
addExpense();
break;
case PRINT_INCOME:
printIncome();
break;
case PRINT_EXPENSE:
printExpense();
break;
case RESUME_ACCOUNT:
resumeAccount();
break;
case EXIT:
saveData();
}
} catch(NullPointerException ex) {
}
printOptions();
}
addData.close();
}
public void addIncome() {
income = addData.createIncome();
dataBase.addBudget(income);
}
public void addExpense() {
Expense expense = addData.createExpense();
dataBase.addBudget(expense);
}
public void printIncome() {
inf.printIncome();
}
public void printExpense() {
inf.printExpense();
}
public void resumeAccount() {
account.resumeIncome();
account.resumeExpense();
}
public void saveData() {
fileManager.writeDataBaseToFile(dataBase);
}
public void printOptions() {
System.out.println("Wybierz opcję:");
for(int i=0; i<6; i++) {
System.out.println(Options.values()[i]);
}
}
}
Your code can never reach the EXIT case, because when option is EXIT the loop terminates before the switch is ever entered.
while ((option=...) != Options.EXIT) {
// execute loop body when option is not EXIT
switch (option) {
...
case EXIT: // <-- execution can simply never reach here. not ever.
saveData();
}
Try moving saveData() outside the while loop:
while (...) { // process options
}
// We are exiting, save data.
saveData();
addData.close();
P.S. Make sure the output stream in your FileManager is closed; the try-with-resources block you already use takes care of that.

SWT Component for choose file only from workspace

I work with SWT and Eclipse plugins. I need to choose a file only from the workspace. I found a component for choosing a directory in the workspace, and a component for choosing a file from the file system, but I have not found a component for choosing a file from the workspace.
Now I'm using org.eclipse.swt.widgets.FileDialog and setting the filter setFilterPath(Platform.getLocation().toOSString()). But the user can still choose a file from outside the workspace. They should only be able to select files from within the workspace.
Thanks for the answers. I created my own component and use it. I also added a filter for choosing files.
import java.util.ArrayList;
import java.util.List;
import org.eclipse.core.resources.IContainer;
import org.eclipse.core.resources.IFile;
import org.eclipse.core.resources.IProject;
import org.eclipse.core.resources.IResource;
import org.eclipse.core.resources.ResourcesPlugin;
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;
import org.eclipse.jface.viewers.ILabelProvider;
import org.eclipse.jface.viewers.ITreeContentProvider;
import org.eclipse.jface.viewers.Viewer;
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Shell;
import org.eclipse.ui.dialogs.ElementTreeSelectionDialog;
import org.eclipse.ui.dialogs.ISelectionStatusValidator;
import org.eclipse.ui.model.WorkbenchLabelProvider;
/**
* @author Alexey Prybytkouski
*/
public class ResourceFileSelectionDialog extends ElementTreeSelectionDialog {
private String[] extensions;
private static ITreeContentProvider contentProvider = new ITreeContentProvider() {
public Object[] getChildren(Object element) {
if (element instanceof IContainer) {
try {
return ((IContainer) element).members();
}
catch (CoreException e) {
}
}
return null;
}
public Object getParent(Object element) {
return ((IResource) element).getParent();
}
public boolean hasChildren(Object element) {
return element instanceof IContainer;
}
public Object[] getElements(Object input) {
return (Object[]) input;
}
public void dispose() {
}
public void inputChanged(Viewer viewer, Object oldInput, Object newInput) {
}
};
private static final IStatus OK = new Status(IStatus.OK, PLUGIN_ID, 0, "", null);
private static final IStatus ERROR = new Status(IStatus.ERROR, PLUGIN_ID, 0, "", null);
/*
* Validator
*/
private ISelectionStatusValidator validator = new ISelectionStatusValidator() {
public IStatus validate(Object[] selection) {
return selection.length == 1 && selection[0] instanceof IFile
&& checkExtension(((IFile) selection[0]).getFileExtension()) ? OK : ERROR;
}
};
public ResourceFileSelectionDialog(String title, String message, String[] type) {
this(Display.getDefault().getActiveShell(), WorkbenchLabelProvider.getDecoratingWorkbenchLabelProvider(),
contentProvider);
this.extensions = type;
setTitle(title);
setMessage(message);
setInput(computeInput());
setValidator(validator);
}
public ResourceFileSelectionDialog(Shell parent, ILabelProvider labelProvider, ITreeContentProvider contentProvider) {
super(parent, labelProvider, contentProvider);
}
/*
* Show projects
*/
private Object[] computeInput() {
/*
* Refresh projects tree.
*/
IProject[] projects = ResourcesPlugin.getWorkspace().getRoot().getProjects();
for (int i = 0; i < projects.length; i++) {
try {
projects[i].refreshLocal(IResource.DEPTH_INFINITE, null);
} catch (CoreException e) {
e.printStackTrace();
}
}
try {
ResourcesPlugin.getWorkspace().getRoot().refreshLocal(IResource.DEPTH_ONE, null);
} catch (CoreException e) {
}
List<IProject> openProjects = new ArrayList<IProject>(projects.length);
for (int i = 0; i < projects.length; i++) {
if (projects[i].isOpen()) {
openProjects.add(projects[i]);
}
}
return openProjects.toArray();
}
/*
* Check file extension
*/
private boolean checkExtension(String name) {
if (name.equals("*")) {
return true;
}
for (int i = 0; i < extensions.length; i++) {
if (extensions[i].equals(name)) {
return true;
}
}
return false;
}
}
and call:
ResourceFileSelectionDialog dialog = new ResourceFileSelectionDialog("Title", "Message", new String[] { "properties" });
dialog.open();
Try this; with it you should be able to browse through the workspace.
You need to add the eclipse.ui and resources plugins as dependencies.
ElementTreeSelectionDialog dialog = new ElementTreeSelectionDialog(
Display.getDefault().getActiveShell(),
new WorkbenchLabelProvider(),
new BaseWorkbenchContentProvider());
dialog.open();
I don't know of any SWT component that provides that kind of control over user interaction.
So I think the best solution here is:
You can develop a window that reads the content of the folder, displays it to the user, and gives them no navigation possibilities other than subfolders of the root folder (the workspace folder in your case).
See these examples:
http://www.ibm.com/developerworks/opensource/library/os-ecgui1/
http://www.ibm.com/developerworks/library/os-ecgui2/

How to load media resources from the classpath in JMF

I have a Java application that I want to turn into an executable jar. I am using JMF in this application, and I can't seem to get the sound files working right...
I create the jar using
jar cvfm jarname.jar manifest.txt *.class *.gif *.wav
So, all the sound files get put inside the jar, and in the code, I am creating the Players using
Player player = Manager.createPlayer(ClassName.class.getResource("song1.wav"));
The jar is on my desktop, and when I attempt to run it, this exception occurs:
javax.media.NoPlayerException: Cannot find a Player for :jar:file:/C:/Users/Pojo/Desktop/jarname.jar!/song1.wav
...It's not getting IOExceptions, so it seems to at least be finding the file itself all right.
Also, before I used getResource, I had it like this:
Player player = Manager.createPlayer(new File("song1.wav").toURL());
and it was playing fine, so I know nothing is wrong with the sound file itself.
The reason I am trying to switch to this method instead of the File method is so that the sound files can be packaged inside the jar itself and not have to be its siblings in a directory.
This is a far cry from production code, but it seems to resolve any runtime exceptions (though it's not actually wired up to play anything yet):
import javax.media.Manager;
import javax.media.Player;
import javax.media.protocol.URLDataSource;
// ...
URL url = JmfTest.class.getResource("song1.wav");
System.out.println("url: " + url);
URLDataSource uds = new URLDataSource(url);
uds.connect();
Player player = Manager.createPlayer(uds);
New solution:
First, a custom DataSource class that returns a SourceStream that implements Seekable is needed:
package com.ziesemer.test;
import java.io.Closeable;
import java.io.IOException;
import java.io.InputStream;
import java.net.JarURLConnection;
import java.net.URL;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import javax.media.Duration;
import javax.media.MediaLocator;
import javax.media.Time;
import javax.media.protocol.ContentDescriptor;
import javax.media.protocol.PullDataSource;
import javax.media.protocol.PullSourceStream;
import javax.media.protocol.Seekable;
/**
* @author Mark A. Ziesemer
* <www.ziesemer.com>
*/
public class JarDataSource extends PullDataSource{
protected JarURLConnection conn;
protected ContentDescriptor contentType;
protected JarPullSourceStream[] sources;
protected boolean connected;
public JarDataSource(URL url) throws IOException{
setLocator(new MediaLocator(url));
connected = false;
}
@Override
public PullSourceStream[] getStreams(){
return sources;
}
@Override
public void connect() throws IOException{
conn = (JarURLConnection)getLocator().getURL().openConnection();
conn.connect();
connected = true;
JarFile jf = conn.getJarFile();
JarEntry je = jf.getJarEntry(conn.getEntryName());
String mimeType = conn.getContentType();
if(mimeType == null){
mimeType = ContentDescriptor.CONTENT_UNKNOWN;
}
contentType = new ContentDescriptor(ContentDescriptor.mimeTypeToPackageName(mimeType));
sources = new JarPullSourceStream[1];
sources[0] = new JarPullSourceStream(jf, je, contentType);
}
@Override
public String getContentType(){
return contentType.getContentType();
}
@Override
public void disconnect(){
if(connected){
try{
sources[0].close();
}catch(IOException e){
e.printStackTrace();
}
connected = false;
}
}
@Override
public void start() throws IOException{
// Nothing to do.
}
@Override
public void stop() throws IOException{
// Nothing to do.
}
@Override
public Time getDuration(){
return Duration.DURATION_UNKNOWN;
}
@Override
public Object[] getControls(){
return new Object[0];
}
@Override
public Object getControl(String controlName){
return null;
}
protected class JarPullSourceStream implements PullSourceStream, Seekable, Closeable{
protected final JarFile jarFile;
protected final JarEntry jarEntry;
protected final ContentDescriptor type;
protected InputStream stream;
protected long position;
public JarPullSourceStream(JarFile jarFile, JarEntry jarEntry, ContentDescriptor type) throws IOException{
this.jarFile = jarFile;
this.jarEntry = jarEntry;
this.type = type;
this.stream = jarFile.getInputStream(jarEntry);
}
@Override
public ContentDescriptor getContentDescriptor(){
return type;
}
@Override
public long getContentLength(){
return jarEntry.getSize();
}
@Override
public boolean endOfStream(){
return position >= getContentLength(); // the original comparison was inverted
}
@Override
public Object[] getControls(){
return new Object[0];
}
@Override
public Object getControl(String controlType){
return null;
}
@Override
public boolean willReadBlock(){
if(endOfStream()){
return true;
}
try{
return stream.available() == 0;
}catch(IOException e){
return true;
}
}
@Override
public int read(byte[] buffer, int offset, int length) throws IOException{
int read = stream.read(buffer, offset, length);
if(read > 0){
position += read; // guard: read() returns -1 at end of stream
}
return read;
}
@Override
public long seek(long where){
try{
if(where < position){
stream.close();
stream = jarFile.getInputStream(jarEntry);
position = 0;
}
long skip = where - position;
while(skip > 0){
long skipped = stream.skip(skip);
skip -= skipped;
position += skipped;
}
}catch(IOException ioe){
// Made a best effort.
ioe.printStackTrace();
}
return position;
}
@Override
public long tell(){
return position;
}
@Override
public boolean isRandomAccess(){
return true;
}
@Override
public void close() throws IOException{
try{
stream.close();
}finally{
jarFile.close();
}
}
}
}
Then, the above custom data source is used to create a player, and a ControllerListener is added to cause the player to loop:
package com.ziesemer.test;
import java.net.URL;
import javax.media.ControllerEvent;
import javax.media.ControllerListener;
import javax.media.EndOfMediaEvent;
import javax.media.Manager;
import javax.media.Player;
import javax.media.Time;
/**
* @author Mark A. Ziesemer
* <www.ziesemer.com>
*/
public class JmfTest{
public static void main(String[] args) throws Exception{
URL url = JmfTest.class.getResource("Test.wav");
JarDataSource jds = new JarDataSource(url);
jds.connect();
final Player player = Manager.createPlayer(jds);
player.addControllerListener(new ControllerListener(){
@Override
public void controllerUpdate(ControllerEvent ce){
if(ce instanceof EndOfMediaEvent){
player.setMediaTime(new Time(0));
player.start();
}
}
});
player.start();
}
}
Note that without the custom data source, JMF tries repeatedly to seek back to the beginning - but fails, and eventually gives up. This can be seen by debugging with the same ControllerListener, which will receive several events for each attempt.
Or, using the MediaPlayer approach to loop (that you mentioned in my previous answer):
package com.ziesemer.test;
import java.net.URL;
import javax.media.Manager;
import javax.media.Player;
import javax.media.bean.playerbean.MediaPlayer;
/**
* @author Mark A. Ziesemer
* <www.ziesemer.com>
*/
public class JmfTest{
public static void main(String[] args) throws Exception{
URL url = JmfTest.class.getResource("Test.wav");
JarDataSource jds = new JarDataSource(url);
jds.connect();
final Player player = Manager.createPlayer(jds);
MediaPlayer mp = new MediaPlayer();
mp.setPlayer(player);
mp.setPlaybackLoop(true);
mp.start();
}
}
Again, I would not consider this production-ready code (could use some more Javadocs and logging, etc.), but it is tested and working (Java 1.6), and should meet your needs nicely.
Merry Christmas, and happy holidays!
Manager.createPlayer(this.getClass().getResource("/song1.wav"));
That will work if the song1.wav is in the root of a Jar that is on the run-time class-path of the application.
