I've built a Java library that lets Java applications interact with Microsoft OneNote.
I recently discovered that the API has changed:
To get a specific section, the URL used to be
https://www.onenote.com/api/v1.0/me/notes/sections/SECTION_ID
and is now
https://graph.microsoft.com/v1.0/me/onenote/sections/SECTION_ID
Both are "v1.0", yet they return responses with different shapes:
OneNote API:
{
"@odata.context": "https://www.onenote.com/api/v1.0/$metadata#me/notes/sections(parentNotebook(id,name,self),parentSectionGroup(id,name,self))/$entity",
"id": "SECTION_ID",
"self": "https://www.onenote.com/api/v1.0/me/notes/sections/SECTION_ID",
"createdTime": "2014-05-29T08:56:57.223Z",
"name": "Adresses",
"createdBy": "xxxx",
"lastModifiedBy": "xxxx",
"lastModifiedTime": "2014-06-10T12:55:22.41Z",
"isDefault": false
}
Microsoft Graph API:
{
"@odata.context": "https://graph.microsoft.com/v1.0/$metadata#users('xxx%40live.com')/onenote/sections/$entity",
"id": "SECTION_ID",
"self": "https://graph.microsoft.com/v1.0/users/xxx@live.com/onenote/sections/SECTION_ID",
"createdDateTime": "2014-05-29T08:56:57.223Z",
"displayName": "Adresses",
"lastModifiedDateTime": "2014-06-10T12:55:22.41Z",
"isDefault": false,
"pagesUrl": "https://graph.microsoft.com/v1.0/users/xxx@live.com/onenote/sections/SECTION_ID/pages",
"createdBy": {
"user": {
"id": "USER_ID",
"displayName": "xxxx"
}
},
"lastModifiedBy": {
"user": {
"id": "USER_ID",
"displayName": "xxxx"
}
}
}
I wonder whether I need to upgrade to the Microsoft Graph API, or whether it is safe to remain on the OneNote API.
I can't find any documentation about the migration; all the links that used to point to the old URL now point to the new one...
We encourage you to move to the Microsoft Graph API because it offers a single auth token across multiple services (OneNote, OneDrive, SharePoint, etc.). That said, you can stay on the OneNote API; it is still fully supported.
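For what it's worth, the URL rewrite between the two endpoints shown above is mechanical. Here's a minimal sketch, based only on the two URLs in the question (other endpoints, and the renamed fields such as name → displayName and createdTime → createdDateTime, would need their own handling):

```java
// Sketch: translate an old OneNote API section URL into its Graph equivalent.
// Covers only the section endpoints shown in the question; other endpoints
// may follow a different pattern.
public class OneNoteUrlMigration {

    static String toGraphUrl(String oneNoteUrl) {
        return oneNoteUrl.replace(
            "https://www.onenote.com/api/v1.0/me/notes/",
            "https://graph.microsoft.com/v1.0/me/onenote/");
    }

    public static void main(String[] args) {
        System.out.println(toGraphUrl(
            "https://www.onenote.com/api/v1.0/me/notes/sections/SECTION_ID"));
        // → https://graph.microsoft.com/v1.0/me/onenote/sections/SECTION_ID
    }
}
```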
I am fetching country pricing from the following Twilio Pricing API endpoint:
https://pricing.twilio.com/v1/PhoneNumbers/Countries/{countryCode}
expected response =>
{
"url": "https://pricing.twilio.com/v1/PhoneNumbers/Countries/US",
"country": "United States",
"price_unit": "USD",
"phone_number_prices": [
{
"number_type": "local",
"base_price": "1.00",
"current_price": "1.00"
},
{
"number_type": "toll free",
"base_price": "2.00",
"current_price": "2.00"
}
],
"iso_country": "US"
}
But when I fetch the country, the call throws an exception:
Country country= Country.fetcher(countryCode).fetch();
exception =>
Unrecognized field "number_type" (class com.twilio.type.PhoneNumberPrice), not marked as ignorable (5 known properties: "basePrice", "type", "base_price", "currentPrice", "current_price"])
at [Source: (org.apache.http.conn.EofSensorInputStream); line: 1, column: 212] (through reference chain: com.twilio.rest.pricing.v1.phonenumber.Country["phone_number_prices"]-
>java.util.ArrayList[0]->com.twilio.type.PhoneNumberPrice["number_type"])
How can I resolve this problem?
Twilio developer evangelist here.
What version of the Twilio Java library are you using? This looks like an issue that was fixed back in March 2017.
I recommend you upgrade the version of the Twilio Java library you are using and if that doesn't solve the issue, raise an issue on the library repo.
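If you build with Maven, upgrading means bumping the dependency version. A sketch, with a placeholder version number (pick the newest release from Maven Central; the fix mentioned above shipped in 2017):

```xml
<dependency>
    <groupId>com.twilio.sdk</groupId>
    <artifactId>twilio</artifactId>
    <!-- placeholder: use the latest published release; any version
         after March 2017 should contain the "number_type" fix -->
    <version>7.x.x</version>
</dependency>
```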
I am running Nexus OSS version 3.29.2-02 and I am experiencing some weird behavior. I am building various images at a CI level (GitLab) and I am pushing them to a custom repository.
For the most part everything works OK and I have no issues tagging and pushing my produced images. Lately though, one of the projects that produces Docker images fails to push, with the following error:
$ TAGGED=${NEXUS_DOCKER_URL}/${BASE_IMAGE_NAME}:snapshot-MR${CI_MERGE_REQUEST_IID}
$ docker tag ${BASE_IMAGE_NAME}:latest ${TAGGED}
$ docker push ${TAGGED}
The push refers to repository [<custom-repository-url>/<image-name:tag>]
cade37b0f9c9: Preparing
578ec024f17c: Preparing
fe0b994190e8: Preparing
b24d08ca4359: Preparing
9a14db3b513b: Preparing
777b2c648970: Preparing
777b2c648970: Waiting
b24d08ca4359: Layer already exists
9a14db3b513b: Layer already exists
777b2c648970: Layer already exists
fe0b994190e8: Pushed
cade37b0f9c9: Pushed
578ec024f17c: Pushed
[DEPRECATION NOTICE] registry v2 schema1 support will be removed in an upcoming release. Please contact admins of the <custom-repository-url> registry NOW to avoid future disruption.
errors:
blob unknown: blob unknown to registry
blob unknown: blob unknown to registry
ERROR: Job failed: exit code 1
I have tried debugging this behavior and searching online for a solution, but I have yet to find anything. It seems that, for some reason, this specific Docker image cannot be uploaded. I have tried the same procedure from both a local machine and stateless CI builders, and the behavior is consistent, i.e. I was able to push it only once and then the process kept failing.
For reference my Dockerfile is the following:
FROM <custom-repository-url>/adoptopenjdk/openjdk11:jre-11.0.10_9-alpine
WORKDIR /home/app
COPY build/libs/email-service.jar application.jar
# Set the appropriate timezone
RUN apk add --no-cache tzdata && \
cp /usr/share/zoneinfo/America/New_York /etc/localtime && \
echo "America/New_York" > /etc/timezone
EXPOSE 8080
CMD java -jar ${OPTS} application.jar
This is quite straightforward and does not hide anything complicated. I initially thought the problem could be attributed to using a proxied base image (i.e. the FROM line), but this is done in several other projects without any issues.
I have also tried checking Nexus's logs, and the only thing I see is the following:
2021-02-05 17:12:27,441+0000 ERROR [qtp1025847496-15765] ci-deploy org.sonatype.nexus.repository.docker.internal.orient.V2ManifestUtilImpl - Manifest refers to missing layer: sha256:66db482b5034f8eda0b18533d4eddb0012f4940bf3d348b08ac3bac8486bb2ee for: fts/marketing/email-service/snapshot-MR40 in repository RepositoryImpl$$EnhancerByGuice$$4d5af99c{type=hosted, format=docker, name='docker-hosted-s3'}
2021-02-05 17:12:27,443+0000 ERROR [qtp1025847496-15765] ci-deploy org.sonatype.nexus.repository.docker.internal.orient.V2ManifestUtilImpl - Manifest refers to missing layer: sha256:2ec25ba939258edb2e85293896c5126478d79fe416d3b60feb20426755bcea5a for: fts/marketing/email-service/snapshot-MR40 in repository RepositoryImpl$$EnhancerByGuice$$4d5af99c{type=hosted, format=docker, name='docker-hosted-s3'}
2021-02-05 17:12:27,445+0000 WARN [qtp1025847496-15765] ci-deploy org.sonatype.nexus.repository.docker.internal.V2Handlers - Error: PUT /v2/fts/marketing/email-service/manifests/snapshot-MR40: 400 - org.sonatype.nexus.repository.docker.internal.V2Exception: Invalid Manifest
So my questions are:
What does this error really mean? I don't find it very useful:
errors:
blob unknown: blob unknown to registry
blob unknown: blob unknown to registry
What is really causing this behavior and how can I address the problem?
Note (not that it should make any difference), the image is a dockerized Micronaut application, using the latest version of the framework.
For reference, the output of docker inspect for said image is the following:
[{
"Id": "sha256:fec226a68e3b744fc792e47d3235e67f06b17883e60df52c8ae82c5a7ba9750f",
"RepoTags": [
"<custom-repository-url>/fts/marketing/email-service:mes-33-3",
"test-mes-33:latest"
],
"RepoDigests": [],
"Parent": "sha256:ddd8e2235b60d7636283097fc61e5971c32b3006ee52105e2a77e7d4ee7e709e",
"Comment": "",
"Created": "2021-02-06T21:06:59.987108458Z",
"Container": "8ab70692b75aac21d0866816aa52af5febf620744282d71a39dce55f81fe3e44",
"ContainerConfig": {
"Hostname": "8ab70692b75a",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"8080/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/opt/java/openjdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LANG=en_US.UTF-8",
"LANGUAGE=en_US:en",
"LC_ALL=en_US.UTF-8",
"JAVA_VERSION=jdk-11.0.10+9",
"JAVA_HOME=/opt/java/openjdk"
],
"Cmd": [
"/bin/sh",
"-c",
"#(nop) ",
"CMD [\"/bin/sh\" \"-c\" \"java -jar ${OPTS} application.jar\"]"
],
"Image": "sha256:ddd8e2235b60d7636283097fc61e5971c32b3006ee52105e2a77e7d4ee7e709e",
"Volumes": null,
"WorkingDir": "/home/app",
"Entrypoint": null,
"OnBuild": null,
"Labels": {}
},
"DockerVersion": "19.03.13",
"Author": "",
"Config": {
"Hostname": "",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"8080/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/opt/java/openjdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LANG=en_US.UTF-8",
"LANGUAGE=en_US:en",
"LC_ALL=en_US.UTF-8",
"JAVA_VERSION=jdk-11.0.10+9",
"JAVA_HOME=/opt/java/openjdk"
],
"Cmd": [
"/bin/sh",
"-c",
"java -jar ${OPTS} application.jar"
],
"Image": "sha256:ddd8e2235b60d7636283097fc61e5971c32b3006ee52105e2a77e7d4ee7e709e",
"Volumes": null,
"WorkingDir": "/home/app",
"Entrypoint": null,
"OnBuild": null,
"Labels": null
},
"Architecture": "amd64",
"Os": "linux",
"Size": 220998577,
"VirtualSize": 220998577,
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/78561c2e477b099a547bead4ea17b677bb01376fc1ed1ce1cd942157d35c0329/diff:/var/lib/docker/overlay2/af8ac4feace0cbecd616e2a02850ec366715aaa5ac8ad143cb633f52b0f6fbe2/diff:/var/lib/docker/overlay2/211a8e68c833f664de5d304838b8cd98b8e5e790f79da8b8839a4d52d02a8d66/diff:/var/lib/docker/overlay2/cbc98e7274ff8266425aed31989066ff7c5f7a46d9334b84110fc57d8b1d942c/diff:/var/lib/docker/overlay2/c773dedbc53b81c2e68ad61811445c0377271db3af526dbf5a6aa6671d0b2b71/diff",
"MergedDir": "/var/lib/docker/overlay2/04240d9f745382480e52e04d8088de6f65a9ece0cd6e091953087f3d06fcc93c/merged",
"UpperDir": "/var/lib/docker/overlay2/04240d9f745382480e52e04d8088de6f65a9ece0cd6e091953087f3d06fcc93c/diff",
"WorkDir": "/var/lib/docker/overlay2/04240d9f745382480e52e04d8088de6f65a9ece0cd6e091953087f3d06fcc93c/work"
},
"Name": "overlay2"
},
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:777b2c648970480f50f5b4d0af8f9a8ea798eea43dbcf40ce4a8c7118736bdcf",
"sha256:9a14db3b513b928759670c6a9b15fd89a8ad9bf222c75e0998c21bcb04e25e48",
"sha256:b24d08ca43598c9ea44f73c3f5dfca2b4897c475b2cc480bac98cccc42dce10f",
"sha256:11d1fa1ad1ef523c60369c11b1096baf89c8d43afa53813e84c73d0926848598",
"sha256:30001f69fd3b3b08fdbf6d843e38d0a16d0e46e84923f92480ac88603c0eb680",
"sha256:b2d3c5f57d1d626a7501b8871f548fd7e1f7625fe05c1885c27ec415b14e9915"
]
},
"Metadata": {
"LastTagTime": "2021-02-06T23:08:30.440032169+02:00"
}
}]
A docker registry (in your case Nexus) throws that error whenever it encounters a missing/invalid layer in the image.
Nexus used to have difficulty with foreign layers but that shouldn't be a problem since you are running quite a recent version.
I would think that you only need to enable "foreign layer caching" in Nexus to get this working.
It'd be helpful to include the output of docker manifest inspect <custom-repository-url>/adoptopenjdk/openjdk11:jre-11.0.10_9-alpine and docker manifest inspect ${TAGGED}
Docker registry API spec
I produce messages with the same Avro schema to one topic, but using different Confluent Schema Registry instances. I get this error when I consume the topic:
org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition XXXXX_XXXX_XXX-0 at offset 0. If needed, please seek past the record to continue consumption.
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id 7
Caused by: org.apache.kafka.common.errors.SerializationException: Could not find class XXXXX_XXXX_XXX specified in writer's schema whilst finding reader's schema for a SpecificRecord.
How can I handle the differing Avro message IDs?
Schema:
{
"type": "record",
"name": "XXXXX_XXXX_XXX",
"namespace": "aa.bb.cc.dd",
"fields": [
{
"name": "ACTION",
"type": [
"null",
"string"
],
"default":null,
"doc":"action"
},
{
"name": "EMAIL",
"type": [
"null",
"string"
],
"default":null,
"doc":"email address"
}
]
}
Produced message:
{"Action": "A", "EMAIL": "xxxx@xxx.com"}
It's not possible to use different Registry URLs in producers and still consume the messages consistently.
The reason is that a different schema ID will be embedded in the messages on the topic.
The schema ID lookup cannot be skipped.
If you had used the same registry, the same schema payload would always map to the same ID, which the consumer would then be able to use consistently to read messages.
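To make the "same registry" point concrete, here is a sketch of the consumer-side settings involved. The broker address and group ID are placeholders; the property names are the standard Confluent Avro serde settings:

```java
// Sketch of consumer settings for reading SpecificRecords with the Confluent
// Avro deserializer. The key point for the error above: schema.registry.url
// must be the SAME registry the producer registered against, because each
// message carries a schema ID that only that registry can resolve.
import java.util.Properties;

public class ConsumerConfigSketch {

    static Properties consumerProps(String registryUrl) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "example-group");           // placeholder group
        props.put("key.deserializer",
                  "io.confluent.kafka.serializers.KafkaAvroDeserializer");
        props.put("value.deserializer",
                  "io.confluent.kafka.serializers.KafkaAvroDeserializer");
        // Must match the registry the producer used, or the ID lookup fails.
        props.put("schema.registry.url", registryUrl);
        // Deserialize into the generated class (e.g. XXXXX_XXXX_XXX)
        // rather than a GenericRecord.
        props.put("specific.avro.reader", "true");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps("http://localhost:8081"));
    }
}
```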
I am building a Dockerised record-playback system to help me record websites, so that I can design scrapers against a local copy rather than the real thing. This means I do not swamp a website with automated requests, and it has the added advantage that I do not need to be connected to the web to work.
I have used the Java-based WireMock internally, which records from a queue of site scrapes performed with Wget. I am using the WireMock API to read various pieces of information from the mappings it records.
However, I have spotted from a mapping response that domain information does not seem to be recorded (except where it is in response headers by accident). See the following response from __admin/mappings:
{
"result": {
"ok": true,
"list": [
{
"id": "794d609f-99b9-376d-b6b8-04dab161c023",
"uuid": "794d609f-99b9-376d-b6b8-04dab161c023",
"request": {
"url": "/robots.txt",
"method": "GET"
},
"response": {
"status": 404,
"bodyFileName": "body-robots.txt-j9qqJ.txt",
"headers": {
"Server": "nginx/1.0.15",
"Date": "Wed, 04 Jan 2017 21:04:40 GMT",
"Content-Type": "text/html",
"Connection": "keep-alive"
}
}
},
{
"id": "e246fac2-f9ad-3799-b7b7-066941408b8b",
"uuid": "e246fac2-f9ad-3799-b7b7-066941408b8b",
"request": {
"url": "/about/careers/",
"method": "GET"
},
"response": {
"status": 200,
"bodyFileName": "body-about-careers-GhVqy.txt",
"headers": {
"Server": "nginx/1.0.15",
"Date": "Wed, 04 Jan 2017 21:04:35 GMT",
"Content-Type": "text/html",
"Last-Modified": "Wed, 04 Jan 2017 12:52:12 GMT",
"Connection": "keep-alive",
"X-CACHE-URI": "/about/careers/",
"Accept-Ranges": "bytes"
}
}
},
{
"id": "def378f5-a93c-333e-9663-edcd30c936d7",
"uuid": "def378f5-a93c-333e-9663-edcd30c936d7",
"request": {
"url": "/about/careers/feed/",
"method": "GET"
},
"response": {
"status": 200,
"bodyFileName": "body-careers-feed-Fd2fO.xml",
"headers": {
"Server": "nginx/1.0.15",
"Date": "Wed, 04 Jan 2017 21:04:45 GMT",
"Content-Type": "application/rss+xml; charset=UTF-8",
"Transfer-Encoding": "chunked",
"Connection": "keep-alive",
"X-Powered-By": "PHP/5.3.3",
"Vary": "Cookie",
"X-Pingback": "http://www.example.com/xmlrpc.php",
"Last-Modified": "Thu, 06 Jun 2013 14:01:52 GMT",
"ETag": "\"765fc03186b121a764133349f8b716df\"",
"X-Robots-Tag": "noindex, follow",
"Link": "<http://www.example.com/?p=2680>; rel=shortlink",
"X-CACHE-URI": "null cache"
}
}
},
{
"id": "616ca6d7-6e57-4c10-8b57-f6f3dabc0930",
"uuid": "616ca6d7-6e57-4c10-8b57-f6f3dabc0930",
"request": {
"method": "ANY"
},
"response": {
"status": 200,
"proxyBaseUrl": "http://www.example.com"
},
"priority": 10
}
]
}
}
The only clear recording of a URL is in the final entry, against proxyBaseUrl, and given that I had to specify a URL in the console call, I am now worried that if I record against a different domain, the domain each mapping came from will be lost.
That would mean that in playback mode WireMock would only be able to play back one domain, and I'd have to restart it and point it at another cache to play back different sites. This is not workable for my use case, so is there a way around this problem?
(I have done a little work with Mountebank and would be willing to switch to it, though I find WireMock generally easier to use. My limited understanding of Mountebank is that it suffers from the same single-domain problem, though I am happy to be corrected on that. I'd be happy to swap to any robust, open-source, API-driven recording HTTP proxy if dropping WireMock is the only way forward.)
It's possible to serve WireMock stubs for multiple domains by adding a Host header criterion in your requests. Assuming your DNS/host file maps all the relevant domains to your WireMock server's IP, then this will cause it to behave like virtual hosting on an ordinary web server.
The main issue is that the recorder won't add the host header to your mappings so you'd need to do this yourself afterwards, or hack the recorder to do it on the fly.
I've been considering adding better support for this, so watch this space.
I'd also suggest checking out Hoverfly, which seems to solve this problem pretty well already.
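To illustrate the post-processing step: a Host criterion can be added to each recorded mapping's request section. A sketch of what the first recorded stub above might look like afterwards (the domain value is illustrative):

```json
{
  "request": {
    "method": "GET",
    "url": "/robots.txt",
    "headers": {
      "Host": {
        "equalTo": "www.example.com"
      }
    }
  },
  "response": {
    "status": 404,
    "bodyFileName": "body-robots.txt-j9qqJ.txt"
  }
}
```

With mappings like this, two sites can both serve `/robots.txt` from the same WireMock instance, disambiguated by the Host header the client sends.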
Use Case
I am trying to connect a JHipster-generated application with an Ionic 2 app. I am using HTTP session authentication.
Overview of the issue
XMLHttpRequest cannot load http://192.168.0.14:8081//vconnect/api/countries. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://192.168.0.4:8100' is therefore not allowed access. The response had HTTP status code 403.
JHipster configuration, a .yo-rc.json file generated in the root folder
{
"generator-jhipster": {
"jhipsterVersion": "3.0.0",
"baseName": "vconnect",
"packageName": "com.zenfact.vconnect",
"packageFolder": "com/zenfact/vconnect",
"serverPort": "8080",
"authenticationType": "session",
"hibernateCache": "ehcache",
"clusteredHttpSession": "no",
"websocket": "no",
"databaseType": "sql",
"devDatabaseType": "postgresql",
"prodDatabaseType": "postgresql",
"searchEngine": "elasticsearch",
"buildTool": "maven",
"enableSocialSignIn": false,
"rememberMeKey": "559bbe3167552d040ba24d16506d389a7ba851c3",
"useSass": false,
"applicationType": "monolith",
"testFrameworks": [
"gatling"
],
"enableTranslation": true,
"nativeLanguage": "en",
"languages": [
"en",
"zh-cn",
"fr",
"hi",
"ja"
]
}
}
From my research I found that mobile apps do not play well with HTTP session authentication.
So I have recreated the JHipster project with OAuth2 authentication, which seems to work fine with the Ionic app I am developing.
The "No 'Access-Control-Allow-Origin' header" issue is solved by uncommenting the cors section in application.yml.
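For reference, the cors block to uncomment in application-dev.yml looks roughly like this in JHipster 3.x; the exact defaults vary by generator version, so treat the values below as illustrative:

```yaml
jhipster:
    cors:
        allowed-origins: "*"
        allowed-methods: GET, PUT, POST, DELETE, OPTIONS
        allowed-headers: "*"
        exposed-headers:
        allow-credentials: true
        max-age: 1800
```

Note that `allowed-origins: "*"` is convenient for development but should be narrowed to the app's actual origin (e.g. the Ionic dev server) before production.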