I followed these steps in the hope of getting the storage emulator working on localhost.
I am using Windows 8 RTM.
Downloaded Eclipse and copied it to Program Files.
Installed Java JDK 7.
Installed Azure SDK.
Installed Azure plugin for Eclipse.
Launched storage emulator from the "Start" screen.
Created a Java project.
Added the external JARs for Azure to this project's build path.
Wrote this simple sample code:
import com.microsoft.windowsazure.services.blob.client.CloudBlobClient;
import com.microsoft.windowsazure.services.blob.client.CloudBlobContainer;
import com.microsoft.windowsazure.services.core.storage.CloudStorageAccount;

public class AzureStore {
    public static final String storageConnectionString = "DefaultEndpointsProtocol=http;"
            + "UseDevelopmentStorage=true;"
            + "AccountName=devstoreaccount1;"
            + "BlobEndpoint=http://127.0.0.1:10000;"
            + "AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==";

    public static void main(String[] args) throws Exception {
        // Retrieve storage account from connection string
        CloudStorageAccount storageAccount = CloudStorageAccount
                .parse(storageConnectionString);

        // Create the blob client
        CloudBlobClient blobClient = storageAccount.createCloudBlobClient();

        // Get a reference to a container
        // The container name must be lower case
        CloudBlobContainer container = blobClient
                .getContainerReference("tweet");

        try {
            // Create the container if it does not exist
            System.out.println(container.createIfNotExist());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
It gives the following exception:
com.microsoft.windowsazure.services.core.storage.StorageException: The value for one of the HTTP headers is not in the correct format.
at com.microsoft.windowsazure.services.core.storage.StorageException.translateException(StorageException.java:104)
at com.microsoft.windowsazure.services.blob.client.CloudBlobContainer$2.execute(CloudBlobContainer.java:334)
at com.microsoft.windowsazure.services.blob.client.CloudBlobContainer$2.execute(CloudBlobContainer.java:291)
at com.microsoft.windowsazure.services.core.storage.utils.implementation.ExecutionEngine.executeWithRetry(ExecutionEngine.java:110)
at com.microsoft.windowsazure.services.blob.client.CloudBlobContainer.createIfNotExist(CloudBlobContainer.java:339)
at com.microsoft.windowsazure.services.blob.client.CloudBlobContainer.createIfNotExist(CloudBlobContainer.java:257)
at AzureStore.main(AzureStore.java:26)
I am confused at this point as to what might be wrong. Can someone help me?
I think the error is happening because of an incorrect storage service version in the API. In your code you're trying to create a blob container in development storage. The "x-ms-version" request header is sent as "2012-02-12", which, although it is the latest version, is not yet supported by the development storage. Development storage still supports only "2011-08-18".
If you try your code against cloud storage, you should be able to create that blob container.
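For reference, a cloud storage connection string has this shape (the account name and key below are placeholders for your own):

DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=yourAccountKey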
If you're only doing your development against development storage, one thing you could do is download the source code from GitHub (https://github.com/WindowsAzure/azure-sdk-for-java/downloads) and modify the following line of code in Constants.java
public static final String TARGET_STORAGE_VERSION = "2012-02-12";
to
public static final String TARGET_STORAGE_VERSION = "2011-08-18";
and compile the source code again. This may break some new functionality introduced in the latest service release (like asynchronous copy blob, etc.).
The other alternative is to wait for the new SDK to come out and hope that the emulator in that version supports the latest storage service version.
More about the URI class.
See if the below works for you (a minimal sketch; devstoreaccount1 and the well-known key from your connection string are the fixed emulator credentials):

URI blobEndPoint = new URI("http://127.0.0.1:10000/devstoreaccount1");
CloudBlobClient bClient = new CloudBlobClient(blobEndPoint,
        new StorageCredentialsAccountAndKey("devstoreaccount1",
                "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="));
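The rest of your code should then work unchanged against this client, e.g.:

CloudBlobContainer container = bClient.getContainerReference("tweet");
System.out.println(container.createIfNotExist());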
I'm trying to understand a comment that a colleague made. We're using testcontainers to create a fixture:
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

public class SalesforceFixture extends GenericContainer<SalesforceFixture> {

    private static final String APPLICATION_NAME = "salesforce-emulator";

    public SalesforceFixture() {
        // super(ImageResolver.resolve(APPLICATION_NAME));
        super(DockerImageName.parse("gcr.io/ad-selfserve/salesforce-emulator:latest"));
        ...
    }
    ...
The commented code is what it used to be. The next line is my colleague's suggestion. And on that line he commented:
This is the part I don't know. The [ImageResolver] gets the specific version of the emulator, rather than the latest. You need a docker-info file for that though, which jib doesn't automatically generate (but I think it can).
This is what I know or have figured out so far:
SalesforceFixture is a class that will be used by other projects to write tests. It spins up a container in Docker, running a service that emulates the real service's API. It's like a local version of the service that behaves enough like the real thing that if one writes code and tests using the fixture, it should work the same in production. (This is where my knowledge ends.)
I looked into ImageResolver—it seems to be a class we wrote that searches a filesystem for something:
public static String resolve(String applicationName, File... roots) {
    Stream<File> searchPaths = Arrays.stream(roots).flatMap((value) -> {
        return Stream.of(new File(value, "../" + applicationName), new File(value, applicationName));
    });
    Optional<File> buildFile = searchPaths.flatMap((searchFile) -> {
        if (searchFile.exists()) {
            File imageFile = new File(searchFile + File.separator + "/target/docker/image-name");
            if (imageFile.exists()) {
                return Stream.of(imageFile);
            }
        }
        return Stream.empty();
    }).findAny();
    InputStream build = (InputStream) buildFile.map(ImageResolver::fileStream).orElseGet(() -> {
        return searchClasspath(applicationName);
    });
    if (build != null) {
        try {
            return IOUtils.toString(build, Charset.defaultCharset()).trim();
        } catch (IOException var6) {
            throw new RuntimeException("An exception has occurred while reading build file", var6);
        }
    } else {
        throw new RuntimeException("Could not resolve target image for application: " + applicationName);
    }
}
But I'm confused. What filesystem? Like, what is the present working directory? My local computer, wherever I ran the Java program from? Or is this from within some container? (I don't think so.) Or maybe the directory structure inside a .jar file? Or somewhere in gcr.io?
What does he mean about a "specific version number" vs. "latest"? I mean, when I build this project, whatever it built is all I have. Isn't that equivalent to "latest"? In what case would an older version of an image be present? (That's what made me think of gcr.io.)
Or does he mean that, in a project using this project's image, one will not be able to specify a version via Maven/pom.xml, and it will always spin up the latest?
Sorry this is long, just trying to "show my work." Any hints welcome. I'll keep looking.
I can't comment on the specifics of your own internal implementations, but ImageResolver seems to work on your local filesystem, e.g. it looks into your target/ directory and also searches the classpath. I can imagine this code was written just for resolving an actual image name (not an image), since it returns a String.
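As a sketch of what it appears to expect (inferred from the code above; the image tag is made up): a plain-text file at <some-project>/target/docker/image-name whose whole content is the image reference, e.g.

gcr.io/ad-selfserve/salesforce-emulator:1.4.2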
Regarding latest: using the latest tag for a Docker image is generally considered an anti-pattern, so your colleague is likely commenting on this. Here is a random article from the web explaining some of the issues with the latest tag:
https://vsupalov.com/docker-latest-tag/
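In code, pinning a version would look something like this (the tag below is hypothetical):

public SalesforceFixture() {
    // A specific, immutable tag instead of "latest" makes test runs reproducible.
    super(DockerImageName.parse("gcr.io/ad-selfserve/salesforce-emulator:1.4.2"));
}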
Besides, I don't understand why you're asking these questions, which are very specific to your project, here on SO rather than asking your colleague.
I'm trying to load an existing .jfr file that was recorded on another machine, external to our organisation. I now want to deobfuscate the information, either as a plugin for JDK Mission Control, or as a utility that reads in a .jfr file and writes out a de-obfuscated version.
My class does the relevant implementation of the API
public class JFRProcessor implements IParserExtension {
    // implementation details below
And I have tested it (successfully) with the following
List<File> files = new ArrayList<>();
files.add(new File("/user/rafe/Input001.jfr"));
List<IParserExtension> extensions = new ArrayList<>();
extensions.add(new JFRProcessor());
IItemCollection events = JfrLoaderToolkit.loadEvents(files, extensions);

// write out to XML to validate the change
RecordingPrinter printer = new RecordingPrinter(new PrintWriter(new File("/user/rafe/Output0001.xml")), Verbosity.HIGH, false);
printer.print(events);
When I then try to export this as a jar, I have the fully qualified classname (com.extension.JFRProcessor) in the relevant META-INF/services/org.openjdk.jmc.flightrecorder.parser.IParserExtension file - and JDK Mission Control doesn't do anything with the plugin (when put in the drop-ins directory).
This was then verified by exporting the jar and in a separate project (with the exported jar in the build path):
ServiceLoader<IParserExtension> loader = ServiceLoader.load(IParserExtension.class,
        IParserExtension.class.getClassLoader());
loader.forEach(ext -> System.out.println(ext.getClass().getName())); // confirms the extension is discoverable
Another approach that I took was to write out the events:
I have also tried using the latest SNAPSHOT release of JDK Mission Control with the new Recordings class in org.openjdk.jmc.flightrecorder.writer.api, but I am struggling to see how to get from the IItemCollection to any useful data to feed into the Recording instance that I'm trying to write into.
final Recording rec = Recordings.newRecording("/user/rafe/Output-001.jfr");
events.forEach(event -> {
    IType<IItem> type = event.getType();
    rec.writeEvent(typedValue);
});
Any help would be appreciated for either approach - as I'm struggling to see how to use this without de-obfuscating the data first!
I am using the AWS Java SDK and trying to run some tests; getting:
Unable to load AWS credentials from the /AwsCredentials.properties file on the classpath
The credentials file at ~/.aws/ is correct per the AWS specs; I 777'd it to ensure there were no access issues.
I am not using any IDE plug-ins; per the AWS docs, having a credentials file at ~/.aws/ should suffice. Anyone have this working with just the SDK installed? If I hard-code the file path into the ClasspathPropertiesFileCredentialsProvider() request, it spits the error back with the path instead of the AwsCredentials.properties string, which doesn't exist anywhere (yes, I tried making one of those in ~/.aws/ as well).
Thanks much for any insights; the code below is straight from Amazon:
import com.amazonaws.auth.ClasspathPropertiesFileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.sns.AmazonSNSClient;
import com.amazonaws.services.sns.model.PublishRequest;
import com.amazonaws.services.sns.model.PublishResult;

public class SNS {
    public static void main(String[] args) {
        AmazonSNSClient snsClient = new AmazonSNSClient(new ClasspathPropertiesFileCredentialsProvider());
        snsClient.setRegion(Region.getRegion(Regions.US_EAST_1));

        String msg = "this is a test";
        PublishRequest publishRequest = new PublishRequest("my arn", msg);
        PublishResult publishResult = snsClient.publish(publishRequest);
        System.out.println("MessageId - " + publishResult.getMessageId());
    }
}
If you use DefaultAWSCredentialsProviderChain instead of ClasspathPropertiesFileCredentialsProvider, it will automatically check various default locations for AWS credentials. (Documentation)
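As a minimal sketch against the code above (same SNS setup, only the provider changes):

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;

// The chain checks environment variables, Java system properties,
// and the ~/.aws/credentials profile file, in that order.
AmazonSNSClient snsClient = new AmazonSNSClient(new DefaultAWSCredentialsProviderChain());
snsClient.setRegion(Region.getRegion(Regions.US_EAST_1));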
Have you verified that your $HOME environment variable is set for the process you are running? The AWS SDK relies on $HOME to determine the proper location of your .aws folder.
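For what it's worth, a quick way to check from the running process:

System.out.println(System.getenv("HOME")); // should print your home directory, e.g. /home/you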
Well, that didn't work the way I'd planned; I couldn't get the .aws path recognized as a classpath entry (I tried adding it as an external class folder).
I ran the below to find the actual classpaths in my project:
public static void main(String args[]) {
    ClassLoader cl = ClassLoader.getSystemClassLoader();
    URL[] urls = ((URLClassLoader) cl).getURLs();
    for (URL url : urls) {
        System.out.println(url.getFile());
    }
}
and then dropped my AWS credentials into a new AwsCredentials.properties file in one of the directories from above (I had one; the rest were JAR files).
I changed the keys in the file to "accessKey" and "secretKey" from what was there (aws_access_key, aws_secret_access_key) and it worked.
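For reference, the file that ClasspathPropertiesFileCredentialsProvider reads ends up looking like this (values are placeholders):

accessKey=YOUR_ACCESS_KEY_ID
secretKey=YOUR_SECRET_ACCESS_KEY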
Thanks to everyone for their inputs.
Earlier I worked with Pentaho reports, where I could create a report with Pentaho Report Designer and deploy the .prpt file onto the BI server, and it would work fine.
Now I am looking for a solution where I can use the .prpt file from a Java program and run it just like Jasper reports (.jrxml files), because I need to integrate Pentaho reports with my web application.
I may be asking a very basic question. But I did not find the proper document on this. Please point me to some correct location and a sample code will be helpful.
Since the first link in the accepted answer doesn't seem to work anymore, people who are looking for examples might find this more useful: https://github.com/pentaho/pentaho-reporting/blob/master/engine/samples/source/org/pentaho/reporting/engine/classic/samples
The code in some of the samples is a bit convoluted, so I'm posting my own report generator class, which contains only the bare essentials for generating a PDF report:
// Imports shown are the standard Pentaho Classic Engine packages (verify against your SDK version).
import java.io.ByteArrayOutputStream;
import java.util.Map;

import org.pentaho.reporting.engine.classic.core.ClassicEngineBoot;
import org.pentaho.reporting.engine.classic.core.MasterReport;
import org.pentaho.reporting.engine.classic.core.ReportProcessingException;
import org.pentaho.reporting.engine.classic.core.layout.output.AbstractReportProcessor;
import org.pentaho.reporting.engine.classic.core.modules.output.pageable.base.PageableReportProcessor;
import org.pentaho.reporting.engine.classic.core.modules.output.pageable.pdf.internal.PdfOutputProcessor;
import org.pentaho.reporting.libraries.resourceloader.Resource;
import org.pentaho.reporting.libraries.resourceloader.ResourceManager;

public class ReportGenerator {

    public byte[] generateReport(byte[] templateBytes, Map<String, Object> params) throws Exception {
        // Boot the reporting engine before any report processing.
        ClassicEngineBoot.getInstance().start();

        MasterReport reportData = loadTemplateDefinition(templateBytes);
        addParametersToReport(params, reportData);
        byte[] reportBytes = generateReport(reportData);
        return reportBytes;
    }

    private MasterReport loadTemplateDefinition(byte[] templateBytes) throws Exception {
        ResourceManager resourceManager = new ResourceManager();
        Resource templateResource = resourceManager.createDirectly(templateBytes, MasterReport.class);
        return (MasterReport) templateResource.getResource();
    }

    private void addParametersToReport(Map<String, Object> params, MasterReport reportData) {
        if (params != null) {
            for (String key : params.keySet()) {
                reportData.getParameterValues().put(key, params.get(key));
            }
        }
    }

    private byte[] generateReport(MasterReport reportData) throws ReportProcessingException {
        ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
        PdfOutputProcessor outputProcessor = new PdfOutputProcessor(reportData.getConfiguration(), outputStream, reportData.getResourceManager());
        AbstractReportProcessor reportProcessor = null;
        try {
            reportProcessor = new PageableReportProcessor(reportData, outputProcessor);
            reportProcessor.processReport();
        } finally {
            if (reportProcessor != null) {
                reportProcessor.close();
            }
        }
        return outputStream.toByteArray();
    }
}
The generateReport method accepts the contents of a .prpt file in the templateBytes parameter, and a list of parameters needed to generate the report in the params parameter.
The byte array it returns contains the contents of a generated PDF report.
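For example, a hypothetical caller (the file paths and parameter name are made up; the enclosing method must allow the checked Exception):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

byte[] template = Files.readAllBytes(Paths.get("/path/to/report.prpt"));
Map<String, Object> params = new HashMap<>();
params.put("reportTitle", "Monthly Sales"); // must match a parameter defined in the .prpt
byte[] pdf = new ReportGenerator().generateReport(template, params);
Files.write(Paths.get("/path/to/report.pdf"), pdf);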
Also, if you are using Maven for your application, it is important to include all the necessary dependencies. I used the list I found here: http://wiki.pentaho.com/display/Reporting/How+to+integrate+report+designer+to+your+web+application, and in it I replaced all the Pentaho-related library versions with version 6.1.0.1-196.
I've successfully embedded the Pentaho Reporting Engine in my Java application. There is a tutorial with the necessary libraries and examples. The only thing you need to consider when starting is to use the same version of the Pentaho SDK, Pentaho Reporting Engine, and Pentaho Report Designer, so that you don't get datasource issues. If you don't want trouble with dependencies, you can download Pentaho Report Designer and drag and drop all its libraries into your web application (most of the issues come when you try to use Pentaho charts, and they are solved this way).
Official Pentaho Docs:
http://infocenter.pentaho.com/help/index.jsp?topic=%2Freporting_embedders_guide%2Ftopic_embedding_engine.html
Pentaho Reporting Classic Engine Core (better try with this first):
http://sourceforge.net/projects/jfreereport/files/01.%20Classic%20Engine/
Just import all the libraries in your IDE (I used Eclipse Helios) and use the example provided; it will work like a charm! Then you can start to modify it depending on your needs. I suggest you review how to handle the path for the reports.
final FacesContext context = FacesContext.getCurrentInstance();
ClassicEngineBoot.getInstance().start();
try {
    // load report definition
    ResourceManager manager = new ResourceManager();
    manager.registerDefaults();
    ExternalContext extContext = context.getExternalContext();
    String reportPath = "file:" + extContext.getRealPath("name/name.prpt");
    Resource res = manager.createDirectly(new URL(reportPath), MasterReport.class);
    MasterReport report = (MasterReport) res.getResource();
    ................
    ................
    httpServletResponse.setContentType("application/rtf");
    httpServletResponse.setHeader("Content-Disposition", "attachment; filename=\"name.rtf\"");
    RTFReportUtil.createRTF(report, httpServletResponse.getOutputStream());
    FacesContext.getCurrentInstance().responseComplete();
} catch (ReportProcessingException ex) {
I'm working on a web-based application which would allow users to upload a Word document to Google Docs using the GData Java API.
(I came across this blog, where I found out that I could actually use a byte array to upload a doc instead of using a File.)
I'm using Netbeans + JDK 1.6
The relevant code in my servlet:
DocsService docsService = new DocsService("care.udhc.co.in");
try {
    docsService.setUserCredentials("sbose78#gmail.com", "*******");

    DocumentListEntry newDocument = new DocumentListEntry();
    String s = "hello bose";
    byte byteData[] = s.getBytes();

    // Load the byte array into a MediaSource
    MediaByteArraySource mediaSource = new MediaByteArraySource(byteData, MediaType.fromFileName("bose.doc").getMimeType());
    MediaContent content = new MediaContent();
    content.setMediaSource(mediaSource);
    content.setMimeType(new ContentType(mediaSource.getContentType()));
    newDocument.setContent(content);

    String gdocsFilename = new String("My Filename");
    newDocument.setTitle(new PlainTextConstruct(gdocsFilename));
    out.println("OK");

    // Push it into Google Docs!!
    DocumentListEntry uploadedRef = docsService.insert(new URL("https://docs.google.com/feeds/default/private/full/"), newDocument);
} catch (Exception e) {
    out.println(e.toString());
} finally {
    out.close();
}
When I run it locally, I encounter the following error:
com.google.gdata.util.InvalidEntryException: We're sorry, a server error occurred. Please try again. GDataInvalidEntryExceptionWe're sorry, a server error occurred. Please try again.
When I run the version deployed on the Internet (Jelastic cloud), I get this:
java.lang.NoClassDefFoundError: com/google/gdata/data/extensions/QuotaBytesTotal
com.google.gdata.data.docs.MetadataEntry.declareExtensions(MetadataEntry.java:86)
com.google.gdata.data.ExtensionProfile.addDeclarations(ExtensionProfile.java:71)
com.google.gdata.data.BaseFeed.declareExtensions(BaseFeed.java:235)
com.google.gdata.client.docs.DocsService.declareExtensions(DocsService.java:171)
com.google.gdata.client.docs.DocsService.<init>(DocsService.java:108)
bose.google.UploadToDocs.processRequest(UploadToDocs.java:30)
bose.google.UploadToDocs.doGet(UploadToDocs.java:79)
javax.servlet.http.HttpServlet.service(HttpServlet.java:690)
javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
Can someone suggest a workaround?
It seems like you are missing one of the required dependencies, probably gdata-core-1.0.jar.
Also, check this page for external dependencies: https://developers.google.com/gdata/articles/java_client_lib
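If you manage dependencies with Maven, the GData core library is also published there; the coordinates are roughly as below (verify the version against your needs):

<dependency>
    <groupId>com.google.gdata</groupId>
    <artifactId>core</artifactId>
    <version>1.47.1</version>
</dependency>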