I am using Xuggler in Java to change the bitrate of an input MP3 file and store the result in an output file. I took an example I found on the net that loads the file into a reader and adds a writer as a listener. Does anyone know how I can then modify the bitrate?
Here's the code I've been using:
import com.xuggle.mediatool.IMediaReader;
import com.xuggle.mediatool.IMediaViewer;
import com.xuggle.mediatool.IMediaWriter;
import com.xuggle.mediatool.ToolFactory;
import org.slf4j.LoggerFactory;
public class TranscodingExample {
private static final String inputFilename = "/home/user/Desktop/file_changed.mp3";
private static final String outputFilename = "/home/user/Desktop/file_changed.flv";
public static void main(String[] args) {
// create a media reader
IMediaReader mediaReader =
ToolFactory.makeReader(inputFilename);
// create a media writer
IMediaWriter mediaWriter =
ToolFactory.makeWriter(outputFilename, mediaReader);
// add a writer to the reader, to create the output file
mediaReader.addListener(mediaWriter);
// create a media viewer with stats enabled
IMediaViewer mediaViewer = ToolFactory.makeViewer(true);
// add a viewer to the reader, to see the decoded media
mediaReader.addListener(mediaViewer);
// read and decode packets from the source file and
// and dispatch decoded audio and video to the writer
while (mediaReader.readPacket() == null) ;
}
}
EDIT-1:
I couldn't really get around this one, so I just ran a Linux command from inside the Java app. You can find a reference to the code here. The command I used was:
ffmpeg -i in.mp3 -b 112k out.mp3
It converts the MP3 to a new one with a bitrate of 112k.
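For reference, here is a minimal sketch of how that command can be launched from Java with ProcessBuilder. It assumes ffmpeg is installed and on the PATH and that in.mp3 sits in the working directory; newer ffmpeg builds may also prefer -b:a over -b for the audio bitrate:
import java.io.File;
import java.io.IOException;

public class FfmpegBitrateChanger {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "ffmpeg", "-i", "in.mp3", "-b", "112k", "out.mp3");
        pb.redirectErrorStream(true);                  // merge ffmpeg's stderr into stdout
        pb.redirectOutput(new File("ffmpeg-log.txt")); // keep the log instead of letting the pipe fill up
        Process process = pb.start();
        int exitCode = process.waitFor();              // block until the conversion finishes
        System.out.println("ffmpeg exited with code " + exitCode);
    }
}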
Take a look at IAudioResampler:
Used to resample IAudioSamples to different sample rates or number of channels.
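For completeness, here is an untested sketch of how the output bitrate itself might be set through the mediatool API, by declaring the output audio stream yourself and adjusting its stream coder before any packets are written. The channel count and sample rate below are assumptions and must match the decoded input (otherwise you would need IAudioResampler in between):
import com.xuggle.mediatool.IMediaReader;
import com.xuggle.mediatool.IMediaWriter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.xuggler.ICodec;

public class BitrateExample {
    public static void main(String[] args) {
        IMediaReader reader = ToolFactory.makeReader("/home/user/Desktop/file_changed.mp3");
        IMediaWriter writer = ToolFactory.makeWriter("/home/user/Desktop/out.mp3", reader);

        // Declare the output MP3 stream ourselves so we can reach its coder.
        // 2 channels / 44100 Hz are assumptions; they must match the source audio.
        int streamIndex = writer.addAudioStream(0, 0,
                ICodec.findEncodingCodec(ICodec.ID.CODEC_ID_MP3), 2, 44100);

        // Ask for 112 kbit/s before the coder is opened by the first packet.
        writer.getContainer().getStream(streamIndex).getStreamCoder().setBitRate(112000);

        reader.addListener(writer);
        while (reader.readPacket() == null) ;
    }
}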
Related
I am using the Google Text-to-Speech API and I am trying to figure out how I can play the response immediately rather than converting it to an MP3 file.
public static void TTS(String word) throws IOException {
authExplicit();
try (TextToSpeechClient textToSpeechClient = TextToSpeechClient.create()) {
// Set the text input to be synthesized
SynthesisInput input = SynthesisInput.newBuilder().setText(word).build();
// Build the voice request, select the language code ("en-US") and the ssml voice gender
// ("neutral")
VoiceSelectionParams voice =
VoiceSelectionParams.newBuilder()
.setLanguageCode("en-US")
.setSsmlGender(SsmlVoiceGender.NEUTRAL)
.build();
// Select the type of audio file you want returned
AudioConfig audioConfig =
AudioConfig.newBuilder().setAudioEncoding(AudioEncoding.MP3).build();
// Perform the text-to-speech request on the text input with the selected voice parameters and
// audio file type
SynthesizeSpeechResponse response =
textToSpeechClient.synthesizeSpeech(input, voice, audioConfig);
// Get the audio contents from the response
ByteString audioContents = response.getAudioContent();
// HERE, I DO NOT WANT TO CONVERT TO MP3. I just want the audio played out.....
try (OutputStream out = new FileOutputStream("output.mp3")) {
out.write(audioContents.toByteArray());
System.out.println("Audio content written to file \"output.mp3\"");
}
}
}
I fixed this: I added JLayer to my dependencies and then replaced the MP3-writing part with this:
BufferedInputStream inputStream = new BufferedInputStream(new ByteArrayInputStream(audioContents.toByteArray()));
Player player = new Player(inputStream); // Player from the JLayer library (javazoom.jl.player.Player)
player.play();
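Note that Player.play() blocks until playback finishes. If you don't want to block the calling code, here is a small sketch (reusing the audioContents variable from above) that runs playback on its own thread:
new Thread(() -> {
    try (BufferedInputStream in = new BufferedInputStream(
            new ByteArrayInputStream(audioContents.toByteArray()))) {
        new Player(in).play(); // Player from JLayer; returns when the clip ends
    } catch (Exception e) {    // covers IOException and JLayer's JavaLayerException
        e.printStackTrace();
    }
}).start();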
I'm working on a .opus music library software which converts audio/video files to .opus files and tags them with metadata automatically.
Previous versions of the program have apparently saved the album art as binary data, as revealed by exiftool.
The thing is, when I run the command with the -b option to output the data as binary, seemingly the whole output is binary and I'm not sure how to get my program to parse it. I was expecting an entry like Picture : 11010010101101101011....
The output looks similar to this though:
How can I parse the picture data so I can reconstruct the image for newer versions of the program? (I'm using Java8_171 on Kubuntu 18.04)
It looks like you're trying to open the raw bytes in a text editor, which will of course give you gobbledygook, since those raw bytes do not represent characters that any text editor can display. I can see from your exiftool output that you know the length of the image in bytes. Provided you also know the byte position where the image starts in the file, your task becomes relatively easy with a little bit of Java code. If you can get that starting position, you should be able to do something like:
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.*;
public class SaveImage {
public static void main(String[] args) throws IOException {
byte[] imageBytes;
try (RandomAccessFile binaryReader =
new RandomAccessFile("your-file.xxx", "r")) {
int dataLength = 0; // Assign this the byte length shown in your
// post instead of zero
int startPos = 0; // I assume you can find this somehow.
// If it's not at the beginning,
// change it accordingly.
imageBytes = new byte[dataLength];
// Note: read(byte[], off, len) treats its second argument as an offset into the
// array, not a position in the file, so seek to the image first and then read.
binaryReader.seek(startPos);
binaryReader.readFully(imageBytes);
}
try (InputStream in = new ByteArrayInputStream(imageBytes)) {
BufferedImage bImageFromConvert = ImageIO.read(in);
ImageIO.write(bImageFromConvert,
"jpg", // or whatever file format is appropriate
new File("/path/to/your/file.jpg"));
}
}
}
I have a small java program that reads a given file with data and converts it to a csv file.
I've been trying to use the arrow symbols ↑, ↓, → and ← (Alt+24 to 27), but unless the program is run from within NetBeans (using F6), they always come out as '?' in the resulting CSV file.
I have tried using the Unicode escapes, e.g. "\u2190", but it makes no difference.
Anyone know why this is happening?
As requested, here is a sample that shows the same issue. Run from the .jar file it only creates a CSV file containing '?', but running from within NetBeans works.
import java.io.FileNotFoundException;
import java.io.PrintWriter;
public class Sample {
String fileOutName = "testresult.csv";
/**
* @param args the command line arguments
*/
public static void main(String[] args) throws FileNotFoundException {
Sample test = new Sample();
test.saveTheArrow();
}
public void saveTheArrow() {
try (PrintWriter outputStream = new PrintWriter(fileOutName)) {
outputStream.print("←");
outputStream.close();
}
catch (FileNotFoundException e) {
// Do nothing
}
}
}
new PrintWriter(fileOutName) uses the default charset of the JVM - you may have different defaults in NetBeans and in the console.
Google Sheets uses UTF-8 according to this thread, so it would make sense to save your file using that character set:
Files.write(Paths.get("testresult.csv"), "←".getBytes(StandardCharsets.UTF_8));
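If you want to keep the PrintWriter-based code, here is a minimal sketch that pins the charset explicitly (works on Java 7+; newer JDKs also have PrintWriter constructors that accept a Charset directly):
import java.io.FileOutputStream;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;

public class ArrowWriter {
    public static void main(String[] args) throws Exception {
        // Wrapping the stream in an OutputStreamWriter removes the dependency
        // on the JVM's platform default charset.
        try (PrintWriter out = new PrintWriter(
                new OutputStreamWriter(new FileOutputStream("testresult.csv"),
                        StandardCharsets.UTF_8))) {
            out.print("←");
        }
    }
}
Whatever opens the CSV afterwards (Excel, Google Sheets, a text editor) still has to read it as UTF-8 for the arrows to show up.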
Using the "<-" character in your editor is for sure not the desired byte 0x27.
Use
outputStream.print( new String( new byte[] { 0x27}, StandardCharsets.US_ASCII);
I have an existing PDF from which I want to retrieve images
NOTE:
In the Documentation, this is the RESULT variable
public static final String RESULT = "results/part4/chapter15/Img%s.%s";
I don't get why this image is needed. I just want to extract the images from my PDF file.
So now, when I use MyImageRenderListener listener = new MyImageRenderListener(RESULT);
I am getting the error:
results\part4\chapter15\Img16.jpg (The system
cannot find the path specified)
This is the code I have:
package part4.chapter15;
import java.io.IOException;
import com.itextpdf.text.DocumentException;
import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.parser.PdfReaderContentParser;
/**
* Extracts images from a PDF file.
*/
public class ExtractImages {
/** The source PDF from which the images will be extracted. */
public static final String RESOURCE = "resources/pdfs/samplefile.pdf";
public static final String RESULT = "results/part4/chapter15/Img%s.%s";
/**
* Parses a PDF and extracts all the images.
* @param filename the source PDF
*/
public void extractImages(String filename)
throws IOException, DocumentException {
PdfReader reader = new PdfReader(filename);
PdfReaderContentParser parser = new PdfReaderContentParser(reader);
MyImageRenderListener listener = new MyImageRenderListener(RESULT);
for (int i = 1; i <= reader.getNumberOfPages(); i++) {
parser.processContent(i, listener);
}
reader.close();
}
/**
* Main method.
* @param args no arguments needed
* @throws DocumentException
* @throws IOException
*/
public static void main(String[] args) throws IOException, DocumentException {
new ExtractImages().extractImages(RESOURCE);
}
}
You have two questions and the answer to the first question is the key to the answer of the second.
Question 1:
You refer to:
public static final String RESULT = "results/part4/chapter15/Img%s.%s";
And you ask: why is this image needed?
That question is wrong, because Img%s.%s is not a filename of an image, it's a pattern of the filename of an image. While parsing, iText will detect images in the PDF. These images are stored in numbered objects (e.g. object 16) and these images can be exported in different formats (e.g. jpg, png,...).
Suppose that an image is stored in object 16 and that this image is a jpg, then the pattern will resolve to Img16.jpg.
Question 2:
Why do I get an error:
results\part4\chapter15\Img16.jpg (The system cannot find the path specified)
In your PDF, there's a jpg stored in object 16. You are asking iText to store that image using this path: results\part4\chapter15\Img16.jpg (as explained in my answer to Question 1). However: your working directory doesn't have the subdirectories results\part4\chapter15\, hence an IOException (or a FileNotFoundException?) is thrown.
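Incidentally, creating those directories up front also makes the original RESULT pattern work. A minimal sketch (my addition, not part of the book example), for instance in main:
public static void main(String[] args) throws IOException, DocumentException {
    // Create results/part4/chapter15 relative to the working directory so that
    // the FileOutputStream inside the render listener can create Img16.jpg.
    java.nio.file.Files.createDirectories(java.nio.file.Paths.get("results/part4/chapter15"));
    new ExtractImages().extractImages(RESOURCE);
}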
What is the general problem?
You have copy/pasted the ExtractImages example I wrote for my book "iText in Action - Second Edition", but:
You didn't read that book, so you have no idea what that code is supposed to do.
You aren't telling the readers on StackOverflow that this example depends on the MyImageRenderListener class, which is where all the magic happens.
How can you solve your problem?
Option 1:
Change RESULT like this:
public static final String RESULT = "Img%s.%s";
Now the images will be stored in your working directory.
Option 2:
Adapt the MyImageRenderListener class, more specifically this method:
public void renderImage(ImageRenderInfo renderInfo) {
try {
String filename;
FileOutputStream os;
PdfImageObject image = renderInfo.getImage();
if (image == null) return;
filename = String.format(path,
renderInfo.getRef().getNumber(), image.getFileType());
os = new FileOutputStream(filename);
os.write(image.getImageAsBytes());
os.flush();
os.close();
} catch (IOException e) {
System.out.println(e.getMessage());
}
}
iText calls this method on the listener whenever an image is encountered. It passes an ImageRenderInfo that contains plenty of information about that image.
In this implementation, we store the image bytes as a file. This is how we create the path to that file:
String.format(path,
renderInfo.getRef().getNumber(), image.getFileType())
As you can see, the pattern stored in RESULT is used in such a way that the first occurrence of %s is replaced with a number and the second occurrence with a file extension.
You could easily adapt this method so that it stores the images as byte[] in a List if that is what you want.
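For example, here is a minimal sketch of such a listener (untested; it implements the same iText 5 RenderListener interface that MyImageRenderListener is built on, but keeps the image bytes in memory instead of writing files):
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import com.itextpdf.text.pdf.parser.ImageRenderInfo;
import com.itextpdf.text.pdf.parser.PdfImageObject;
import com.itextpdf.text.pdf.parser.RenderListener;
import com.itextpdf.text.pdf.parser.TextRenderInfo;

public class ImageCollectingListener implements RenderListener {

    private final List<byte[]> images = new ArrayList<byte[]>();

    public List<byte[]> getImages() {
        return images;
    }

    public void renderImage(ImageRenderInfo renderInfo) {
        try {
            PdfImageObject image = renderInfo.getImage();
            if (image != null) {
                // Keep the raw image bytes instead of writing them to disk.
                images.add(image.getImageAsBytes());
            }
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }

    // The text callbacks are not needed for image extraction.
    public void beginTextBlock() { }
    public void renderText(TextRenderInfo renderInfo) { }
    public void endTextBlock() { }
}
You would pass an instance of it to parser.processContent(i, listener) exactly like the original listener, and read getImages() afterwards.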
I am currently trying to write a jukebox-like application in Java that is able to play any audio source possible, but encountered some difficulties when trying to play radio streams.
For playback I use JLayer from JavaZoom, which works fine as long as the target is a direct media file or a direct media stream (I can play PCM, MP3 and OGG just fine). However, I encounter difficulties with radio streams that either start with pre-media data such as an m3u/pls playlist (which I could fix by adding detection beforehand), or that are streamed on port 80 while a web page exists at the same location and the media transmitted depends on the type of request. In the latter case, whenever I try to stream the media, I get the HTML data instead.
Example link of a stream that is hidden behind a web-page: http://stream.t-n-media.de:8030
This is playable in VLC, but if you put it into a browser or my application you'll receive an HTML file.
Is there:
A ready-made, free solution that I could use in place of JLayer? Preferably open source so I can study it?
A tutorial that can help me to write a solution on my own?
Or can someone give me an example on how to properly detect/request a media stream?
Thanks in advance!
import java.io.*;
import java.net.*;
import javax.sound.sampled.*;
import javax.sound.midi.*;
/**
* This class plays sounds streaming from a URL: it does not have to preload
* the entire sound into memory before playing it. It is a command-line
* application with no gui. It includes code to convert ULAW and ALAW
* audio formats to PCM so they can be played. Use the -m command-line option
* before MIDI files.
*/
public class PlaySoundStream {
// Create a URL from the command-line argument and pass it to the
// right static method depending on the presence of the -m (MIDI) option.
public static void main(String[ ] args) throws Exception {
if (args[0].equals("-m")) streamMidiSequence(new URL(args[1]));
else streamSampledAudio(new URL(args[0]));
// Exit explicitly.
// This is needed because the audio system starts background threads.
System.exit(0);
}
/** Read sampled audio data from the specified URL and play it */
public static void streamSampledAudio(URL url)
throws IOException, UnsupportedAudioFileException,
LineUnavailableException
{
AudioInputStream ain = null; // We read audio data from here
SourceDataLine line = null; // And write it here.
try {
// Get an audio input stream from the URL
ain=AudioSystem.getAudioInputStream(url);
// Get information about the format of the stream
AudioFormat format = ain.getFormat( );
DataLine.Info info=new DataLine.Info(SourceDataLine.class,format);
// If the format is not supported directly (i.e. if it is not PCM
// encoded), then try to transcode it to PCM.
if (!AudioSystem.isLineSupported(info)) {
// This is the PCM format we want to transcode to.
// The parameters here are audio format details that you
// shouldn't need to understand for casual use.
AudioFormat pcm =
new AudioFormat(format.getSampleRate( ), 16,
format.getChannels( ), true, false);
// Get a wrapper stream around the input stream that does the
// transcoding for us.
ain = AudioSystem.getAudioInputStream(pcm, ain);
// Update the format and info variables for the transcoded data
format = ain.getFormat( );
info = new DataLine.Info(SourceDataLine.class, format);
}
// Open the line through which we'll play the streaming audio.
line = (SourceDataLine) AudioSystem.getLine(info);
line.open(format);
// Allocate a buffer for reading from the input stream and writing
// to the line. Make it large enough to hold 4k audio frames.
// Note that the SourceDataLine also has its own internal buffer.
int framesize = format.getFrameSize( );
byte[ ] buffer = new byte[4 * 1024 * framesize]; // the buffer
int numbytes = 0; // how many bytes
// We haven't started the line yet.
boolean started = false;
for(;;) { // We'll exit the loop when we reach the end of stream
// First, read some bytes from the input stream.
int bytesread=ain.read(buffer,numbytes,buffer.length-numbytes);
// If there were no more bytes to read, we're done.
if (bytesread == -1) break;
numbytes += bytesread;
// Now that we've got some audio data to write to the line,
// start the line, so it will play that data as we write it.
if (!started) {
line.start( );
started = true;
}
// We must write bytes to the line in an integer multiple of
// the framesize. So figure out how many bytes we'll write.
int bytestowrite = (numbytes/framesize)*framesize;
// Now write the bytes. The line will buffer them and play
// them. This call will block until all bytes are written.
line.write(buffer, 0, bytestowrite);
// If we didn't have an integer multiple of the frame size,
// then copy the remaining bytes to the start of the buffer.
int remaining = numbytes - bytestowrite;
if (remaining > 0)
System.arraycopy(buffer,bytestowrite,buffer,0,remaining);
numbytes = remaining;
}
// Now block until all buffered sound finishes playing.
line.drain( );
}
finally { // Always relinquish the resources we use
if (line != null) line.close( );
if (ain != null) ain.close( );
}
}
// A MIDI protocol constant that isn't defined by javax.sound.midi
public static final int END_OF_TRACK = 47;
/* MIDI or RMF data from the specified URL and play it */
public static void streamMidiSequence(URL url)
throws IOException, InvalidMidiDataException, MidiUnavailableException
{
Sequencer sequencer=null; // Converts a Sequence to MIDI events
Synthesizer synthesizer=null; // Plays notes in response to MIDI events
try {
// Create, open, and connect a Sequencer and Synthesizer
// They are closed in the finally block at the end of this method.
sequencer = MidiSystem.getSequencer( );
sequencer.open( );
synthesizer = MidiSystem.getSynthesizer( );
synthesizer.open( );
sequencer.getTransmitter( ).setReceiver(synthesizer.getReceiver( ));
// Specify the InputStream to stream the sequence from
sequencer.setSequence(url.openStream( ));
// This is an arbitrary object used with wait and notify to
// prevent the method from returning before the music finishes
final Object lock = new Object( );
// Register a listener to make the method exit when the stream is
// done. See Object.wait( ) and Object.notify( )
sequencer.addMetaEventListener(new MetaEventListener( ) {
public void meta(MetaMessage e) {
if (e.getType( ) == END_OF_TRACK) {
synchronized(lock) {
lock.notify( );
}
}
}
});
// Start playing the music
sequencer.start( );
// Now block until the listener above notifies us that we're done.
synchronized(lock) {
while(sequencer.isRunning( )) {
try { lock.wait( ); } catch(InterruptedException e) { }
}
}
}
finally {
// Always relinquish the sequencer, so others can use it.
if (sequencer != null) sequencer.close( );
if (synthesizer != null) synthesizer.close( );
}
}
}
I have used this piece of code in one of my projects that deals with audio streaming, and it was working just fine.
Furthermore, you can see similar examples here:
Java Audio Example
Just reading the javadoc of AudioSystem gave me an idea.
There is another signature for getAudioInputStream: you can give it an InputStream instead of a URL.
So, try to obtain the input stream yourself and add the needed headers so that you get the stream instead of the HTML content:
URLConnection uc = url.openConnection();
uc.setRequestProperty("<header name here>", "<header value here>");
InputStream in = uc.getInputStream();
ain=AudioSystem.getAudioInputStream(in);
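To make this concrete, here is a sketch that assumes the server behaves like a typical Shoutcast v1 instance, which serves its HTML status page to anything that looks like a browser and the raw stream to everything else; which header (if any) your particular station keys on is an assumption you would have to verify:
import java.io.BufferedInputStream;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

public class StreamRequestExample {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://stream.t-n-media.de:8030");
        URLConnection uc = url.openConnection();
        // Identify as a media player rather than a browser; many Shoutcast-style
        // servers choose between "status page" and "stream" based on this header.
        uc.setRequestProperty("User-Agent", "MyJukebox/1.0");
        // BufferedInputStream supplies the mark/reset support that
        // AudioSystem.getAudioInputStream needs while sniffing the format.
        InputStream in = new BufferedInputStream(uc.getInputStream());
        AudioInputStream ain = AudioSystem.getAudioInputStream(in);
        System.out.println("Detected format: " + ain.getFormat());
        ain.close();
    }
}
Keep in mind that plain Java Sound only recognizes MP3 if an SPI such as JavaZoom's mp3spi is on the classpath; otherwise getAudioInputStream throws UnsupportedAudioFileException even when the bytes are a valid stream.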
Hope this helps.
I know this answer comes late, but I had the same issue: I wanted to play MP3 and AAC audio and also wanted the user to insert PLS/M3U links. Here is what I did:
First I tried to parse the type by using the simple file name:
import de.webradio.enumerations.FileExtension;
import java.net.URL;
public class FileExtensionParser {
/**
* Parses a file extension.
* @param filenameUrl the url
* @return the file extension; if it cannot be determined from the file name, Apache Tika parses it by live content detection
*/
public FileExtension parseFileExtension(URL filenameUrl) {
String filename = filenameUrl.toString();
if (filename.endsWith(".mp3")) {
return FileExtension.MP3;
} else if (filename.endsWith(".m3u") || filename.endsWith(".m3u8")) {
return FileExtension.M3U;
} else if (filename.endsWith(".aac")) {
return FileExtension.AAC;
} else if(filename.endsWith((".pls"))) {
return FileExtension.PLS;
}
URLTypeParser parser = new URLTypeParser();
return parser.parseByContentDetection(filenameUrl);
}
}
If that fails, I use Apache Tika to do a kind of live detection:
public class URLTypeParser {
/** Uses Apache Tika to parse a URL based on its content.
*
* @param url the webstream url
* @return the detected file encoding: MP3, AAC or unsupported
*/
public FileExtension parseByContentDetection(URL url) {
try {
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
InputStream in = connection.getInputStream();
BodyContentHandler handler = new BodyContentHandler();
AudioParser parser = new AudioParser();
Metadata metadata = new Metadata();
parser.parse(in, handler, metadata);
return parseMediaType(metadata);
} catch (IOException e) {
e.printStackTrace();
} catch (TikaException e) {
e.printStackTrace();
} catch (SAXException e) {
e.printStackTrace();
}
return FileExtension.UNSUPPORTED_TYPE;
}
private FileExtension parseMediaType(Metadata metadata) {
String parsedMediaType = metadata.get("encoding");
if (parsedMediaType.equalsIgnoreCase("aac")) {
return FileExtension.AAC;
} else if (parsedMediaType.equalsIgnoreCase("mpeg1l3")) {
return FileExtension.MP3;
}
return FileExtension.UNSUPPORTED_TYPE;
}
}
This will also solve the HTML problem, since the method will return FileExtension.UNSUPPORTED_TYPE for HTML content.
I combined these classes with a factory pattern and it works fine. The live detection takes only about two seconds.
I don't think this will help you anymore, but since I struggled with it for almost three weeks I wanted to provide a working answer. You can see the whole project on GitHub: https://github.com/Seppl2202/webradio