I have the following code which merges two audio files into one:
import java.io.File;
import java.io.IOException;
import java.io.SequenceInputStream;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
public class WavAppender {
public static void main(String[] args) {
String wavFile1 = "D:\\wav1.wav";
String wavFile2 = "D:\\wav2.wav";
try {
AudioInputStream clip1 = AudioSystem.getAudioInputStream(new File(wavFile1));
AudioInputStream clip2 = AudioSystem.getAudioInputStream(new File(wavFile2));
AudioInputStream appendedFiles =
new AudioInputStream(
new SequenceInputStream(clip1, clip2),
clip1.getFormat(),
clip1.getFrameLength() + clip2.getFrameLength());
AudioSystem.write(appendedFiles,
AudioFileFormat.Type.WAVE,
new File("D:\\wavAppended.wav"));
} catch (Exception e) {
e.printStackTrace();
}
}
}
I will have a string in the format [1,2,3,4,5]. Based on that string I need to select the appropriate wav files. For example, if the string is [3,4,5,6,7], I need to send wavfile3, wavfile4, wavfile5, wavfile6 and wavfile7. What is the best way to achieve this?
Create an array or List of items, so that wavfile1 is at index 0, wavfile2 is at index 1, and so on.
Take each element from the String array, convert it to an int, subtract one from it (as arrays and lists are zero-indexed), and that becomes your index into the "wave file array"...
String waveFile = waveFiles[Integer.parseInt(indicies[0]) - 1];
...Now, this is prone to some issues, particularly the conversion of the String to an int...
Alternatively, you could use a Map, where each wave file is mapped to the corresponding String id:
Map<String, String> waveFiles = new ...;
waveFiles.put("1", "WaveFile1");
waveFiles.put("2", "WaveFile2");
//...
Then you would simply use the value from the String array to look it up...
String waveFile = waveFiles.get(indicies[0]);
Just some ideas...
Take a look at the Collections Trail for more details and ideas...
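A rough sketch tying both ideas together (the "[3,4,5]" input format and the file paths are assumptions based on the question, not something from the original post):
Map<String, String> waveFiles = new HashMap<>();
waveFiles.put("1", "D:\\wav1.wav");
waveFiles.put("2", "D:\\wav2.wav");
waveFiles.put("3", "D:\\wav3.wav");
// ...

String input = "[3,4,5]";
// strip the brackets and whitespace, then split on commas
String[] indicies = input.replaceAll("[\\[\\]\\s]", "").split(",");
List<String> selected = new ArrayList<>();
for (String id : indicies) {
    String waveFile = waveFiles.get(id);
    if (waveFile != null) {            // unknown ids are simply skipped
        selected.add(waveFile);
    }
}
// "selected" can now be appended, pair by pair, with the WavAppender code above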
How would you write a Java function boolean sameContent(Path file1, Path file2) which determines if the two given paths point to files that store the same content? Of course, first I would check if the file sizes are the same; this is a necessary condition for storing the same content. But then I'd like to hear your approaches. If the two files are stored on the same hard drive (as in most of my cases), it's probably not a good idea to jump back and forth between the two streams too often.
This is exactly what the FileUtils.contentEquals method of Apache Commons IO does; the API is here.
Try something like:
File file1 = new File("file1.txt");
File file2 = new File("file2.txt");
boolean isTwoEqual = FileUtils.contentEquals(file1, file2);
It does the following checks before actually doing the comparison:
Both files exist.
Both paths that are passed in are files, not directories.
The lengths in bytes are the same.
They are not one and the same file.
Only then does it compare the contents.
If you don't want to use any external libraries, then simply read both files into byte arrays and compare them with Arrays.equals (requires Java 7 or later):
byte[] f1 = Files.readAllBytes(file1);
byte[] f2 = Files.readAllBytes(file2);
boolean same = Arrays.equals(f1, f2);
If the files are large, then instead of reading the entire files into arrays, you should use BufferedInputStream and read the files chunk-by-chunk as explained here.
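A sketch of that chunked approach (the class and method names here are mine): the BufferedInputStream does the chunked reads internally, so memory use stays constant no matter how large the files are.
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class StreamCompare {
    public static boolean sameContent(Path file1, Path file2) throws IOException {
        if (Files.size(file1) != Files.size(file2)) {
            return false;              // different sizes can never mean same content
        }
        try (InputStream in1 = new BufferedInputStream(Files.newInputStream(file1));
             InputStream in2 = new BufferedInputStream(Files.newInputStream(file2))) {
            int b;
            while ((b = in1.read()) != -1) {
                if (b != in2.read()) {
                    return false;      // first differing byte found
                }
            }
        }
        return true;
    }
}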
Since Java 12 there is the method Files.mismatch, which returns -1 if there is no mismatch in the content of the files. Thus the function would look like the following:
private static boolean sameContent(Path file1, Path file2) throws IOException {
return Files.mismatch(file1, file2) == -1;
}
If the files are small, you can read both into memory and compare the byte arrays.
If the files are not small, you can either compute hashes of their content (e.g. MD5 or SHA-1) one after the other and compare the hashes (this still leaves a very small chance of error), or you can compare their contents directly, but then you still have to read the two streams alternately.
Here is an example:
boolean sameContent(Path file1, Path file2) throws IOException {
final long size = Files.size(file1);
if (size != Files.size(file2))
return false;
if (size < 4096)
return Arrays.equals(Files.readAllBytes(file1), Files.readAllBytes(file2));
try (InputStream is1 = Files.newInputStream(file1);
InputStream is2 = Files.newInputStream(file2)) {
// Compare byte-by-byte.
// Note that this can be sped up drastically by reading large chunks
// (e.g. 16 KBs) but care must be taken as InputStream.read(byte[])
// does not necessarily read a whole array!
int data;
while ((data = is1.read()) != -1)
if (data != is2.read())
return false;
}
return true;
}
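The hashing variant mentioned above could look roughly like this (a sketch, not a drop-in replacement; the SHA-256 algorithm, the class name and the 16 KB buffer size are my own choices). A matching digest is overwhelming but not absolute evidence of equality; a differing digest is proof of inequality.
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

public class HashCompare {
    public static boolean probablySameContent(Path file1, Path file2)
            throws IOException, NoSuchAlgorithmException {
        if (Files.size(file1) != Files.size(file2)) {
            return false;                          // cheap size check first
        }
        return Arrays.equals(digest(file1), digest(file2));
    }

    private static byte[] digest(Path file) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buffer = new byte[16 * 1024];   // read in 16 KB chunks
            int read;
            while ((read = in.read(buffer)) != -1) {
                md.update(buffer, 0, read);
            }
        }
        return md.digest();
    }
}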
This should help you with your problem:
package test;
import java.io.File;
import java.io.IOException;
import org.apache.commons.io.FileUtils;
public class CompareFileContents {
public static void main(String[] args) throws IOException {
File file1 = new File("test1.txt");
File file2 = new File("test2.txt");
File file3 = new File("test3.txt");
boolean compare1and2 = FileUtils.contentEquals(file1, file2);
boolean compare2and3 = FileUtils.contentEquals(file2, file3);
boolean compare1and3 = FileUtils.contentEquals(file1, file3);
System.out.println("Are test1.txt and test2.txt the same? " + compare1and2);
System.out.println("Are test2.txt and test3.txt the same? " + compare2and3);
System.out.println("Are test1.txt and test3.txt the same? " + compare1and3);
}
}
If it is for a unit test, then AssertJ provides a method named hasSameContentAs. An example:
Assertions.assertThat(file1).hasSameContentAs(file2);
I know I'm pretty late to the party on this one, but memory-mapped IO is a pretty simple way to do this if you want to use straight Java APIs and no third-party dependencies. It takes only a few calls to open the files, map them, and then compare them using ByteBuffer.equals(Object).
This is probably going to give you the best performance if you expect the files to be large, because you're offloading most of the IO legwork onto the OS and the otherwise highly optimized bits of the JVM (assuming you're using a decent JVM).
Straight from the
FileChannel JavaDoc:
For most operating systems, mapping a file into memory is more expensive than reading or writing a few tens of kilobytes of data via the usual read and write methods. From the standpoint of performance it is generally only worth mapping relatively large files into memory.
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
public class MemoryMappedCompare {
public static boolean areFilesIdenticalMemoryMapped(final Path a, final Path b) throws IOException {
try (final FileChannel fca = FileChannel.open(a, StandardOpenOption.READ);
final FileChannel fcb = FileChannel.open(b, StandardOpenOption.READ)) {
final MappedByteBuffer mbba = fca.map(FileChannel.MapMode.READ_ONLY, 0, fca.size());
final MappedByteBuffer mbbb = fcb.map(FileChannel.MapMode.READ_ONLY, 0, fcb.size());
return mbba.equals(mbbb);
}
}
}
It's JRE 6+ compatible, library-free, and doesn't read all of the content at once.
public static boolean sameFile(File a, File b) {
    if (a == null || b == null) {
        return false;
    }
    if (a.getAbsolutePath().equals(b.getAbsolutePath())) {
        return true;
    }
    if (!a.exists() || !b.exists()) {
        return false;
    }
    if (a.length() != b.length()) {
        return false;
    }
    boolean eq = true;
    FileChannel channelA = null;
    FileChannel channelB = null;
    try {
        channelA = new RandomAccessFile(a, "r").getChannel();
        channelB = new RandomAccessFile(b, "r").getChannel();
        long channelsSize = channelA.size();
        ByteBuffer buff1 = channelA.map(FileChannel.MapMode.READ_ONLY, 0, channelsSize);
        ByteBuffer buff2 = channelB.map(FileChannel.MapMode.READ_ONLY, 0, channelsSize);
        for (int i = 0; i < channelsSize; i++) {
            if (buff1.get(i) != buff2.get(i)) {
                eq = false;
                break;
            }
        }
    } catch (FileNotFoundException ex) {
        Logger.getLogger(HotUtils.class.getName()).log(Level.SEVERE, null, ex);
        eq = false;   // don't report the files as identical on error
    } catch (IOException ex) {
        Logger.getLogger(HotUtils.class.getName()).log(Level.SEVERE, null, ex);
        eq = false;
    } finally {
        // close the channels so the underlying files are released
        if (channelA != null) {
            try { channelA.close(); } catch (IOException ignored) { }
        }
        if (channelB != null) {
            try { channelB.close(); } catch (IOException ignored) { }
        }
    }
    return eq;
}
package test;
import org.junit.jupiter.api.Test;
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import static org.junit.jupiter.api.Assertions.assertEquals;
public class CSVResultDifference {
    @Test
    public void csvDifference() throws IOException {
        Path file_F = FileSystems.getDefault().getPath("C:\\Projekts\\csvTestX", "yolo2.csv");
        long size_F = Files.size(file_F);
        Path file_I = FileSystems.getDefault().getPath("C:\\Projekts\\csvTestZ", "yolo2.csv");
        long size_I = Files.size(file_I);
        assertEquals(size_F, size_I);
    }
}
it worked for me :)
I'm reading and writing to a ByteBuffer
import org.assertj.core.api.Assertions;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CharsetEncoder;
public class Solution{
public static void main(String[] args) throws Exception{
final CharsetEncoder messageEncoder = Charset.forName("ISO-8859-1").newEncoder();
String message = "TRANSACTION IGNORED";
String carrierName= "CARR00AB";
int messageLength = message.length()+carrierName.length()+8;
System.out.println(" --------Fill data---------");
ByteBuffer messageBuffer = ByteBuffer.allocate(4096);
messageBuffer.order(ByteOrder.BIG_ENDIAN);
messageBuffer.putInt(messageLength);
messageBuffer.put(messageEncoder.encode(CharBuffer.wrap(carrierName)));
messageBuffer.put(messageEncoder.encode(CharBuffer.wrap(message)));
messageBuffer.put((byte) 0x2b);
messageBuffer.flip();
System.out.println("------------Extract Data Approach 1--------");
CharsetDecoder messageDecoder = Charset.forName("ISO-8859-1").newDecoder();
int lengthField = messageBuffer.getInt();
System.out.println("lengthField="+lengthField);
int responseLength = lengthField - 12;
System.out.println("responseLength="+responseLength);
String messageDecoded= messageDecoder.decode(messageBuffer).toString();
System.out.println("messageDecoded="+messageDecoded);
String decodedCarrier = messageDecoded.substring(0, carrierName.length());
System.out.println("decodedCarrier="+ decodedCarrier);
String decodedBody = messageDecoded.substring(carrierName.length(), messageDecoded.length() - 1);
System.out.println("decodedBody="+decodedBody);
Assertions.assertThat(messageLength).isEqualTo(lengthField);
Assertions.assertThat(decodedBody).isEqualTo(message);
Assertions.assertThat(decodedBody).isEqualTo(message);
ByteBuffer messageBuffer2 = ByteBuffer.allocate(4096);
messageBuffer2.order(ByteOrder.BIG_ENDIAN);
messageBuffer2.putInt(messageLength);
messageBuffer2.put(messageEncoder.encode(CharBuffer.wrap(carrierName)));
messageBuffer2.put(messageEncoder.encode(CharBuffer.wrap(message)));
messageBuffer2.put((byte) 0x2b);
messageBuffer2.flip();
System.out.println("---------Extract Data Approach 2--------");
byte [] data = new byte[messageBuffer2.limit()];
messageBuffer2.get(data);
String dataString =new String(data, "ISO-8859-1");
System.out.println(dataString);
}
}
It works fine, but then I thought to refactor it. Please see approach 2 in the above code:
byte [] data = new byte[messageBuffer.limit()];
messageBuffer.get(data);
String dataString =new String(data, "ISO-8859-1");
System.out.println(dataString);
Output= #CARR00ABTRANSACTION IGNORED+
Could you help me understand
why the integer goes missing in the second approach while decoding?
Is there any way to extract the integer in the second approach?
Okay, so you are trying to read an int from the buffer, which consumes 4 bytes, and then trying to get the whole data after those 4 bytes have already been read.
What I have done is call messageBuffer2.clear() after reading the int to resolve this issue. Here is the full code:
System.out.println(messageBuffer2.getInt());
byte[] data = new byte[messageBuffer2.limit()];
messageBuffer2.clear();
messageBuffer2.get(data);
String dataString = new String(data, StandardCharsets.ISO_8859_1);
System.out.println(dataString);
Output is:
35
#CARR0033TRANSACTION IGNORED+
Edit: So basically, clear() does not erase the buffer's contents; it resets the position to 0 and the limit to the capacity. The subsequent get(data) therefore starts reading from the beginning of the buffer again, and that's how it fixes it.
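A tiny illustrative sketch of that position/limit behaviour (this snippet is an addition, not part of the original code):
ByteBuffer buf = ByteBuffer.allocate(16);
buf.putInt(42);   // position = 4
buf.flip();       // position = 0, limit = 4
buf.getInt();     // position = 4 again, so decoding from here would skip the int
buf.clear();      // position = 0, limit = 16 (the capacity); the bytes are NOT erased
// a get(dst) now starts from byte 0 again, which is why the fix above works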
In C++, OpenCV has a nice FileStorage class that makes saving and loading Mat a breeze.
It's as easy as
//To save
FileStorage fs(outputFile, FileStorage::WRITE);
fs << "variable_name" << variable;
//To load
FileStorage fs(outputFile, FileStorage::READ);
fs["variable_name"] >> variable;
The file format is YAML.
I want to use a Mat that I create with a C++ program in Java, ideally, loading it from the saved YAML file. However, I cannot find an equivalent class to FileStorage in the Java bindings. Does one exist? If not, what alternatives do I have?
One possible solution is to write a YAML parser using a Java library such as yamlbeans or snakeyaml.
I chose to use yamlbeans because the default FileStorage encoding is YAML 1.0, and snakeyaml requires 1.1.
My C++ code
FileStorage fs(path, FileStorage::WRITE);
fs << "M" << variable;
Saves the following example YAML file
%YAML:1.0
codebook: !!opencv-matrix
rows: 1
cols: 3
dt: f
data: [ 1.03692314e+02, 1.82692322e+02, 8.46153831e+00 ]
After I remove the header, "%YAML:1.0", I can load it into Java using
import java.io.FileReader;
import java.io.FileNotFoundException;
import java.util.List;
import java.util.Map;
import java.util.Scanner;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import net.sourceforge.yamlbeans.YamlException;
import net.sourceforge.yamlbeans.YamlReader;
public class YamlMatLoader {
// This nested class specifies the expected variables in the file
// Mat cannot be used directly because it lacks rows and cols variables
protected static class MatStorage {
public int rows;
public int cols;
public String dt;
public List<String> data;
// The empty constructor is required by YamlReader
public MatStorage() {
}
public double[] getData() {
double[] dataOut = new double[data.size()];
for (int i = 0; i < dataOut.length; i++) {
dataOut[i] = Double.parseDouble(data.get(i));
}
return dataOut;
}
}
// Loading function
private Mat getMatYml(String path) {
try {
YamlReader reader = new YamlReader(new FileReader(path));
// Set the tag "opencv-matrix" to process as MatStorage
// I'm not sure why the tag is parsed as
// "tag:yaml.org,2002:opencv-matrix"
// rather than "opencv-matrix", but I determined this value by
// debugging
reader.getConfig().setClassTag("tag:yaml.org,2002:opencv-matrix", MatStorage.class);
// Read the string
Map map = (Map) reader.read();
// In file, the variable name for the Mat is "M"
MatStorage data = (MatStorage) map.get("M");
// Create a new Mat to hold the extracted data
Mat m = new Mat(data.rows, data.cols, CvType.CV_32FC1);
m.put(0, 0, data.getData());
return m;
} catch (FileNotFoundException | YamlException e) {
e.printStackTrace();
}
return null;
}
}
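The header removal mentioned above can also be done in code rather than by hand. A minimal sketch (Java 8+, using java.nio.file.Files, java.nio.file.Paths, java.util.ArrayList and java.util.List; the file names are placeholders):
// Drop the "%YAML:1.0" directive line so yamlbeans can parse the rest.
List<String> lines = new ArrayList<>(Files.readAllLines(Paths.get("codebook.yml")));
lines.removeIf(line -> line.startsWith("%YAML"));
Files.write(Paths.get("codebook_clean.yml"), lines);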
I'm trying to create a program which reads CSV files from a directory, parses each line of each file using a regex, and displays the lines after matching the regex pattern.
For instance if this is the first line of my csv file
1997,Ford,E350,"ac, abs, moon",3000.00
my output should be
1997 Ford E350 ac, abs, moon 3000.00
I don't want to use any existing CSV libraries. I'm not good at regex; I've used a regex I found on the net, but it's not working in my program.
This is my source code. I'll be grateful if anyone can tell me where and what I have to modify in order to make my code work. Please explain why.
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.CharBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.util.regex.Pattern;
import java.util.regex.Matcher;
public class RegexParser {
private static Charset charset = Charset.forName("UTF-8");
private static CharsetDecoder decoder = charset.newDecoder();
String pattern = "\"([^\"]*)\"|(?<=,|^)([^,]*)(?=,|$)";
void regexparser( CharBuffer cb)
{
Pattern linePattern = Pattern.compile(".*\r?\n");
Pattern csvpat = Pattern.compile(pattern);
Matcher lm = linePattern.matcher(cb);
Matcher pm = null;
while(lm.find())
{
CharSequence cs = lm.group();
if (pm==null)
pm = csvpat.matcher(cs);
else
pm.reset(cs);
if(pm.find())
{
System.out.println( cs);
}
if (lm.end() == cb.limit())
break;
}
}
public static void main(String[] args) throws IOException {
RegexParser rp = new RegexParser();
String folder = "Desktop/sample";
File dir = new File(folder);
File[] files = dir.listFiles();
for( File entry: files)
{
FileInputStream fin = new FileInputStream(entry);
FileChannel channel = fin.getChannel();
int cs = (int) channel.size();
MappedByteBuffer mbb = channel.map(FileChannel.MapMode.READ_ONLY, 0, cs);
CharBuffer cb = decoder.decode(mbb);
rp.regexparser(cb);
fin.close();
}
}
}
This is my input file
Year,Make,Model,Description,Price
1997,Ford,E350,"ac, abs, moon",3000.00
1999,Chevy,"Venture ""Extended Edition""","",4900.00
1999,Chevy,"Venture ""Extended Edition, Very Large""","",5000.00
1996,Jeep,Grand Cherokee,"MUST SELL!
air, moon roof, loaded",4799.00
I'm getting the input back unchanged as output. Where is the problem in my code? Why doesn't my regex have any effect?
Using a regexp may seem "fancy", but with CSV files it is (at least in my opinion) not worth it. For my parsing I use http://commons.apache.org/csv/. It has never let me down. :)
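As a rough illustration (my own sketch, not part of the original answer) of what Commons CSV usage looks like; "cars.csv" is a placeholder file name and the commons-csv dependency is assumed to be on the classpath:
import java.io.Reader;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVRecord;

public class CommonsCsvExample {
    public static void main(String[] args) throws Exception {
        try (Reader in = Files.newBufferedReader(Paths.get("cars.csv"));
             CSVParser parser = CSVFormat.DEFAULT.parse(in)) {
            for (CSVRecord record : parser) {
                StringBuilder line = new StringBuilder();
                for (String field : record) {
                    line.append(field).append(' ');
                }
                // Prints e.g. "1997 Ford E350 ac, abs, moon 3000.00"
                System.out.println(line.toString().trim());
            }
        }
    }
}
The parser handles the quoted fields and the embedded line break in the Jeep record, which is exactly what a hand-rolled regex struggles with.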
Anyway, I've found the fix myself. Thanks, guys, for your suggestions and help.
This was my initial code
if (pm.find())
    System.out.println(cs);
Now changed this to
while (pm.find())
{
    CharSequence css = pm.group();
    System.out.println(css); // print each matched field
}
Also I used a different Regex. I'm getting the desired output now.
You can try this: [ \t]*+"[^"\r\n]*+"[ \t]*+|[^,\r\n]*+ with this code:
String subjectString = "1997,Ford,E350,\"ac, abs, moon\",3000.00"; // one CSV line from the question
try {
    Pattern regex = Pattern.compile("[ \t]*+\"[^\"\r\n]*+\"[ \t]*+|[^,\r\n]*+", Pattern.CASE_INSENSITIVE | Pattern.UNICODE_CASE | Pattern.MULTILINE);
    Matcher matcher = regex.matcher(subjectString);
    while (matcher.find()) {
        // Do actions
    }
} catch (PatternSyntaxException ex) {
    // Take care of errors
}
But yeah, if it's not a very critical requirement, do try to use something that already works. :)
Take the advice offered and do not use regular expressions to parse a CSV file. The format is deceptively complicated in the way it can be used.
The following answer contains links to wikipedia and the RFC describing the CSV file format:
field size limitation of csv file
The following is the code that reads audio data from 2 audio input streams into a byte array.
import javax.sound.sampled.*;
import java.io.*;
class tester {
public static void main(String args[]) throws IOException {
try {
Clip clip_1 = AudioSystem.getClip();
AudioInputStream ais_1 = AudioSystem.getAudioInputStream( new File("D:\\UnderTest\\wavtester_1.wav") );
clip_1.open( ais_1 );
Clip clip_2 = AudioSystem.getClip();
AudioInputStream ais_2 = AudioSystem.getAudioInputStream( new File( "D:\\UnderTest\\wavtester_2.wav") );
clip_2.open( ais_2 );
byte arr_1[] = new byte[ais_1.available()]; // not the right way ?
byte arr_2[] = new byte[ais_2.available()];
ais_1.read( arr_1 );
ais_2.read( arr_2 );
} catch( Exception exc ) {
System.out.println( exc );
}
}
}
From the above code I have two byte arrays, arr_1 and arr_2, for ais_1 and ais_2. Is there any way to concatenate these two byte arrays and then convert them back into an audio stream? I want to concatenate the two audio files.
Once you have the two byte arrays in hand (see my comment), you can concatenate them into a third array like this:
byte[] arr_combined = new byte[arr_1.length + arr_2.length];
System.arraycopy(arr_1, 0, arr_combined, 0, arr_1.length);
System.arraycopy(arr_2, 0, arr_combined, arr_1.length, arr_2.length);
Still not a complete answer, sorry, as this array is just the sample data - you still need to write out a header followed by the data. I didn't see any way to do this with the AudioSystem api.
Edit: try this:
Join two WAV files from Java?
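The approach in that linked question boils down to wrapping the combined byte array in a new AudioInputStream and letting AudioSystem.write produce the WAV header. A minimal sketch continuing from arr_combined above; it assumes both clips share the same AudioFormat, and the output path is a placeholder (ByteArrayInputStream is covered by the java.io.* import in the question's code):
// Wrap the combined samples in a new AudioInputStream so that
// AudioSystem.write can generate the WAV header for us.
AudioFormat format = ais_1.getFormat();   // both clips must share this format
long frameLength = arr_combined.length / format.getFrameSize();
AudioInputStream combined = new AudioInputStream(
        new ByteArrayInputStream(arr_combined), format, frameLength);
AudioSystem.write(combined, AudioFileFormat.Type.WAVE,
        new File("D:\\UnderTest\\wavCombined.wav"));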