I am running Nexus OSS version 3.29.2-02 and I am experiencing some weird behavior. I am building various images at the CI level (GitLab) and pushing them to a custom repository.
For the most part everything works fine and I have no issues tagging and pushing my produced images. Lately, though, one of the projects that produces Docker images fails to push with the following error:
$ TAGGED=${NEXUS_DOCKER_URL}/${BASE_IMAGE_NAME}:snapshot-MR${CI_MERGE_REQUEST_IID}
$ docker tag ${BASE_IMAGE_NAME}:latest ${TAGGED}
$ docker push ${TAGGED}
The push refers to repository [<custom-repository-url>/<image-name:tag>]
cade37b0f9c9: Preparing
578ec024f17c: Preparing
fe0b994190e8: Preparing
b24d08ca4359: Preparing
9a14db3b513b: Preparing
777b2c648970: Preparing
777b2c648970: Waiting
b24d08ca4359: Layer already exists
9a14db3b513b: Layer already exists
777b2c648970: Layer already exists
fe0b994190e8: Pushed
cade37b0f9c9: Pushed
578ec024f17c: Pushed
[DEPRECATION NOTICE] registry v2 schema1 support will be removed in an upcoming release. Please contact admins of the <custom-repository-url> registry NOW to avoid future disruption.
errors:
blob unknown: blob unknown to registry
blob unknown: blob unknown to registry
ERROR: Job failed: exit code 1
I have tried debugging this behavior as well as searching online for a solution, but I have yet to find anything. It seems that, for some reason, this specific Docker image cannot be uploaded. I have tried the same procedure from both a local machine and from stateless CI builders, and the behavior is consistent, i.e. I was able to push the image only once and then the process kept failing.
For reference, my Dockerfile is the following:
FROM <custom-repository-url>/adoptopenjdk/openjdk11:jre-11.0.10_9-alpine
WORKDIR /home/app
COPY build/libs/email-service.jar application.jar
# Set the appropriate timezone
RUN apk add --no-cache tzdata && \
cp /usr/share/zoneinfo/America/New_York /etc/localtime && \
echo "America/New_York" > /etc/timezone
EXPOSE 8080
CMD java -jar ${OPTS} application.jar
Which is quite straightforward and does not hide anything complicated. I initially thought that the problem could be attributed to using a proxied base image (i.e. FROM), but this is done in several other projects without any issues.
I have also tried checking Nexus's logs, and the only thing I see is the following:
2021-02-05 17:12:27,441+0000 ERROR [qtp1025847496-15765] ci-deploy org.sonatype.nexus.repository.docker.internal.orient.V2ManifestUtilImpl - Manifest refers to missing layer: sha256:66db482b5034f8eda0b18533d4eddb0012f4940bf3d348b08ac3bac8486bb2ee for: fts/marketing/email-service/snapshot-MR40 in repository RepositoryImpl$$EnhancerByGuice$$4d5af99c{type=hosted, format=docker, name='docker-hosted-s3'}
2021-02-05 17:12:27,443+0000 ERROR [qtp1025847496-15765] ci-deploy org.sonatype.nexus.repository.docker.internal.orient.V2ManifestUtilImpl - Manifest refers to missing layer: sha256:2ec25ba939258edb2e85293896c5126478d79fe416d3b60feb20426755bcea5a for: fts/marketing/email-service/snapshot-MR40 in repository RepositoryImpl$$EnhancerByGuice$$4d5af99c{type=hosted, format=docker, name='docker-hosted-s3'}
2021-02-05 17:12:27,445+0000 WARN [qtp1025847496-15765] ci-deploy org.sonatype.nexus.repository.docker.internal.V2Handlers - Error: PUT /v2/fts/marketing/email-service/manifests/snapshot-MR40: 400 - org.sonatype.nexus.repository.docker.internal.V2Exception: Invalid Manifest
So my questions are:
What does this error really mean? I don't find it very useful:
errors:
blob unknown: blob unknown to registry
blob unknown: blob unknown to registry
What is really causing this behavior and how can I address the problem?
Note (not that it should make any difference), the image is a dockerized Micronaut application, using the latest version of the framework.
For reference, the output of docker inspect for said image is the following:
[{
"Id": "sha256:fec226a68e3b744fc792e47d3235e67f06b17883e60df52c8ae82c5a7ba9750f",
"RepoTags": [
"<custom-repository-url>/fts/marketing/email-service:mes-33-3",
"test-mes-33:latest"
],
"RepoDigests": [],
"Parent": "sha256:ddd8e2235b60d7636283097fc61e5971c32b3006ee52105e2a77e7d4ee7e709e",
"Comment": "",
"Created": "2021-02-06T21:06:59.987108458Z",
"Container": "8ab70692b75aac21d0866816aa52af5febf620744282d71a39dce55f81fe3e44",
"ContainerConfig": {
"Hostname": "8ab70692b75a",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"8080/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/opt/java/openjdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LANG=en_US.UTF-8",
"LANGUAGE=en_US:en",
"LC_ALL=en_US.UTF-8",
"JAVA_VERSION=jdk-11.0.10+9",
"JAVA_HOME=/opt/java/openjdk"
],
"Cmd": [
"/bin/sh",
"-c",
"#(nop) ",
"CMD [\"/bin/sh\" \"-c\" \"java -jar ${OPTS} application.jar\"]"
],
"Image": "sha256:ddd8e2235b60d7636283097fc61e5971c32b3006ee52105e2a77e7d4ee7e709e",
"Volumes": null,
"WorkingDir": "/home/app",
"Entrypoint": null,
"OnBuild": null,
"Labels": {}
},
"DockerVersion": "19.03.13",
"Author": "",
"Config": {
"Hostname": "",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"8080/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/opt/java/openjdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LANG=en_US.UTF-8",
"LANGUAGE=en_US:en",
"LC_ALL=en_US.UTF-8",
"JAVA_VERSION=jdk-11.0.10+9",
"JAVA_HOME=/opt/java/openjdk"
],
"Cmd": [
"/bin/sh",
"-c",
"java -jar ${OPTS} application.jar"
],
"Image": "sha256:ddd8e2235b60d7636283097fc61e5971c32b3006ee52105e2a77e7d4ee7e709e",
"Volumes": null,
"WorkingDir": "/home/app",
"Entrypoint": null,
"OnBuild": null,
"Labels": null
},
"Architecture": "amd64",
"Os": "linux",
"Size": 220998577,
"VirtualSize": 220998577,
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/78561c2e477b099a547bead4ea17b677bb01376fc1ed1ce1cd942157d35c0329/diff:/var/lib/docker/overlay2/af8ac4feace0cbecd616e2a02850ec366715aaa5ac8ad143cb633f52b0f6fbe2/diff:/var/lib/docker/overlay2/211a8e68c833f664de5d304838b8cd98b8e5e790f79da8b8839a4d52d02a8d66/diff:/var/lib/docker/overlay2/cbc98e7274ff8266425aed31989066ff7c5f7a46d9334b84110fc57d8b1d942c/diff:/var/lib/docker/overlay2/c773dedbc53b81c2e68ad61811445c0377271db3af526dbf5a6aa6671d0b2b71/diff",
"MergedDir": "/var/lib/docker/overlay2/04240d9f745382480e52e04d8088de6f65a9ece0cd6e091953087f3d06fcc93c/merged",
"UpperDir": "/var/lib/docker/overlay2/04240d9f745382480e52e04d8088de6f65a9ece0cd6e091953087f3d06fcc93c/diff",
"WorkDir": "/var/lib/docker/overlay2/04240d9f745382480e52e04d8088de6f65a9ece0cd6e091953087f3d06fcc93c/work"
},
"Name": "overlay2"
},
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:777b2c648970480f50f5b4d0af8f9a8ea798eea43dbcf40ce4a8c7118736bdcf",
"sha256:9a14db3b513b928759670c6a9b15fd89a8ad9bf222c75e0998c21bcb04e25e48",
"sha256:b24d08ca43598c9ea44f73c3f5dfca2b4897c475b2cc480bac98cccc42dce10f",
"sha256:11d1fa1ad1ef523c60369c11b1096baf89c8d43afa53813e84c73d0926848598",
"sha256:30001f69fd3b3b08fdbf6d843e38d0a16d0e46e84923f92480ac88603c0eb680",
"sha256:b2d3c5f57d1d626a7501b8871f548fd7e1f7625fe05c1885c27ec415b14e9915"
]
},
"Metadata": {
"LastTagTime": "2021-02-06T23:08:30.440032169+02:00"
}
}]
A docker registry (in your case Nexus) throws that error whenever it encounters a missing/invalid layer in the image.
Nexus used to have difficulty with foreign layers but that shouldn't be a problem since you are running quite a recent version.
I would think that you only need to enable "foreign layer caching" in Nexus to get this working.
It'd be helpful to include the output of docker manifest inspect <custom-repository-url>/adoptopenjdk/openjdk11:jre-11.0.10_9-alpine and docker manifest inspect ${TAGGED}.
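Something like the following (a sketch; docker manifest inspect may need experimental CLI features enabled on older Docker clients):
docker manifest inspect <custom-repository-url>/adoptopenjdk/openjdk11:jre-11.0.10_9-alpine
docker manifest inspect ${TAGGED}
# the daemon's view of the layers, for comparison (these are uncompressed
# diff IDs, so they won't match the manifest's compressed digests one-to-one)
docker inspect --format '{{json .RootFS.Layers}}' ${TAGGED}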
See the Docker registry API spec for the definition of the BLOB_UNKNOWN error code.
I'm creating an application using the JHipster stack, developing on Windows 10. Here is my application config:
application {
config {
baseName xxx
applicationType monolith
packageName com.xxx
authenticationType jwt
prodDatabaseType mysql
devDatabaseType mysql
buildTool gradle
cacheProvider redis
clientFramework angularX
serverPort 8090
websocket spring-websocket
}
entities *
}
The problem is I can't even log in to the default account or register a new one.
I tried to find a solution and read that putting -Djdk.io.File.enableADS=true into the VM options works for some people, but it did not work for me. Notably, this error only occurs on Windows: I ran the app without any modification on macOS and it works normally. But macOS is not my primary development environment, so I have to make this work on Windows. Thanks.
JHipster info:
.yo-rc.json file:
{
"generator-jhipster": {
"applicationIndex": 0,
"applicationType": "monolith",
"authenticationType": "jwt",
"baseName": "xxx",
"blueprints": [],
"buildTool": "gradle",
"cacheProvider": "redis",
"clientFramework": "angularX",
"clientPackageManager": "npm",
"clientTheme": "none",
"clientThemeVariant": "",
"creationTimestamp": 1652962669893,
"databaseType": "sql",
"devDatabaseType": "mysql",
"devServerPort": 4200,
"dtoSuffix": "DTO",
"enableGradleEnterprise": false,
"enableHibernateCache": true,
"enableSwaggerCodegen": false,
"enableTranslation": true,
"entities": ["Movie", "Actor", "Category", "Country", "Manufacturer", "Link"],
"entitySuffix": "",
"gradleEnterpriseHost": "",
"jhiPrefix": "jhi",
"jhipsterVersion": "7.8.1",
"jwtSecretKey": "YourJWTSecretKeyWasReplacedByThisMeaninglessTextByTheJHipsterInfoCommandForObviousSecurityReasons",
"languages": ["en"],
"lastLiquibaseTimestamp": 1652963029000,
"messageBroker": false,
"nativeLanguage": "en",
"otherModules": [],
"packageFolder": "com/xxx",
"packageName": "com.xxx",
"pages": [],
"prodDatabaseType": "mysql",
"reactive": false,
"searchEngine": false,
"serverPort": "8090",
"serviceDiscoveryType": false,
"skipCheckLengthOfIdentifier": false,
"skipFakeData": false,
"skipUserManagement": false,
"testFrameworks": [],
"websocket": "spring-websocket",
"withAdminUi": true
}
}
Environment and Tools:
openjdk version "11.0.15" 2022-04-19
OpenJDK Runtime Environment Temurin-11.0.15+10 (build 11.0.15+10)
OpenJDK 64-Bit Server VM Temurin-11.0.15+10 (build 11.0.15+10, mixed mode)
git version 2.33.0.windows.2
node: v16.15.1
npm: 8.11.0
Docker version 20.10.17, build 100c701
Docker Compose version v2.6.1
Here is the error log. It's long, so I created a gist.
I found the solution: in the configuration, Tomcat had been excluded. I removed that line and changed Undertow to Tomcat, and now it works. Thanks!
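For reference, a hedged sketch of what that change looks like in build.gradle, assuming the standard JHipster Gradle layout where spring-boot-starter-web excludes Tomcat in favour of Undertow (your file may differ):
// build.gradle (sketch): restore Tomcat as the embedded server.
// Remove the Undertow starter:
//     implementation "org.springframework.boot:spring-boot-starter-undertow"
// and drop the Tomcat exclusion, i.e. replace
//     implementation("org.springframework.boot:spring-boot-starter-web") {
//         exclude module: "spring-boot-starter-tomcat"
//     }
// with the plain starter, which brings Tomcat back in:
implementation "org.springframework.boot:spring-boot-starter-web"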
Given a normal user ('simpleROUser', with only the 'read' role on the database), an error is thrown when attempting to list collections.
The error message is:
Exception in thread "main" com.mongodb.MongoCommandException: Command failed with error 13 (Unauthorized): 'not authorized on wmMonitoring to execute command { listCollections: 1, cursor: {}, $db: "wmMonitoring", ...' on server xxxxxxx:27001. The full response is {"operationTime": {"$timestamp": {"t": 1614169303, "i": 1}}, "ok": 0.0, "errmsg": "not authorized on wmMonitoring to execute command { listCollections: 1, cursor: {}, $db: \"wmMonitoring\", ...
However, changing only the user credentials to a user with the 'root' role, it works (it lists all the collections under the database 'wmMonitoring').
I've checked the 'simpleROUser' privileges, and 'listCollections' is there:
rs-dev-00:PRIMARY> grants = db.getUser( "simpleROUser", { showCredentials: true, showPrivileges: true, showAuthenticationRestrictions: true } )
rs-dev-00:PRIMARY> grants.user
simpleROUser
rs-dev-00:PRIMARY> grants.inheritedPrivileges
[
{
"resource" : {
"db" : "wmISMonitoring",
"collection" : ""
},
"actions" : [
...
"listCollections",
...
]
},
...
]
rs-dev-00:PRIMARY>
So... what am I missing?
More info:
MongoDB server: Percona distribution, v4.4.1-3
Mongo Java Driver: v4.2.1
Found the issue.
In Brazilian Portuguese this is also referred to as "dedo gordo" ("fat finger").
There was a typo in the grant: the actual database name is wmMonitoring, not wmISMonitoring as it appears in the inheritedPrivileges output above.
Once the database name (in the grant command) was fixed, everything worked.
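A minimal sketch of the fix, assuming the role was assigned with grantRolesToUser (adjust if your user was set up differently, e.g. via createUser); run it against the database where the user is defined, typically admin:
db.revokeRolesFromUser(
  "simpleROUser",
  [ { role: "read", db: "wmISMonitoring" } ]  // the mistyped grant
)
db.grantRolesToUser(
  "simpleROUser",
  [ { role: "read", db: "wmMonitoring" } ]    // the intended database
)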
I am trying to create an ingest pipeline using the PUT request below:
{
"description": "ContentExtractor",
"processors": [
{
"extractor": {
"field": "contentData",
"target_field": "content"
}
}
]
}
But this is resulting in following error:
{
"error": {
"root_cause": [
{
"type": "not_x_content_exception",
"reason": "Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes"
}
],
"type": "not_x_content_exception",
"reason": "Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes"
},
"status": 500
}
I see below exception in ES logs:
org.elasticsearch.common.compress.NotXContentException: Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes
at org.elasticsearch.common.compress.CompressorFactory.compressor(CompressorFactory.java:57) ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.common.xcontent.XContentHelper.convertToMap(XContentHelper.java:65) ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.ingest.PipelineStore.validatePipeline(PipelineStore.java:154) ~[elasticsearch-5.1.2.jar:5.1.2]
at org.elasticsearch.ingest.PipelineStore.put(PipelineStore.java:133) ~[elasticsearch-5.1.2.jar:5.1.2]
This problem happens when Elasticsearch is running on Solaris; the same request works fine on Linux. What am I doing wrong? Can somebody help me fix this issue?
Thanks in advance.
I got the exact same error message, albeit on a different version of Elasticsearch, in two cases: when querying with an erroneous data format (I had misinterpreted the docs at https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html - the "request body" shown there is expected to be plain JSON, not a form-encoded HTTP request body), and when using the old syntax within the path of the URL (just after the index name):
curl -XPUT -H "Content-Type: application/json" http://host:port/index/_mapping/_doc -d "mappings=#mymapping.json"
Just remove the "mappings=" prefix and the trailing path segment!
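For example, a sketch of the corrected call (the key fixes are sending the body as plain JSON via curl's @file syntax and dropping the old type segment from the path; on older Elasticsearch versions the type segment may still be required):
curl -XPUT -H "Content-Type: application/json" \
  "http://host:port/index/_mapping" \
  -d @mymapping.json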
I've built a Java library to let Java applications interact with Microsoft OneNote.
I recently discovered that the API has changed:
For getting a specific Section, the URL was:
https://www.onenote.com/api/v1.0/me/notes/sections/SECTION_ID
And it is now:
https://graph.microsoft.com/v1.0/me/onenote/sections/SECTION_ID
Both are "v1.0" while both have a different signature :
Onenote API:
{
"#odata.context": "https://www.onenote.com/api/v1.0/$metadata#me/notes/sections(parentNotebook(id,name,self),parentSectionGroup(id,name,self))/$entity",
"id": "SECTION_ID",
"self": "https://www.onenote.com/api/v1.0/me/notes/sections/SECTION_ID",
"createdTime": "2014-05-29T08:56:57.223Z",
"name": "Adresses",
"createdBy": "xxxx",
"lastModifiedBy": "xxxx",
"lastModifiedTime": "2014-06-10T12:55:22.41Z",
"isDefault": false,
Microsoft Graph API:
{
"#odata.context": "https://graph.microsoft.com/v1.0/$metadata#users('xxx%40live.com')/onenote/sections/$entity",
"id": "SECTION_ID",
"self": "https://graph.microsoft.com/v1.0/users/xxx#live.com/onenote/sections/SECTION_ID",
"createdDateTime": "2014-05-29T08:56:57.223Z",
"displayName": "Adresses",
"lastModifiedDateTime": "2014-06-10T12:55:22.41Z",
"isDefault": false,
"pagesUrl": "https://graph.microsoft.com/v1.0/users/xxx#live.com/onenote/sections/SECTION_ID/pages",
"createdBy": {
"user": {
"id": "USER_ID",
"displayName": "xxxx"
}
},
"lastModifiedBy": {
"user": {
"id": "USER_ID",
"displayName": "xxxx"
}
},
I wonder if I need to upgrade to the Microsoft Graph API or if it is safe to remain on the OneNote API.
I can't find any documentation about the migration. All the links that pointed to the old URL now point to the new URL...
We encourage you to move to the Microsoft Graph API because it has a single auth token for multiple services like OneNote, OneDrive, SharePoint, etc. But you could stay on the OneNote API, and we still fully support it.
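For what it's worth, a minimal sketch of the equivalent call against the Graph endpoint, assuming you already have an OAuth bearer token with a OneNote read permission:
curl -H "Authorization: Bearer $TOKEN" \
  "https://graph.microsoft.com/v1.0/me/onenote/sections/SECTION_ID"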
I'm trying to set up OrientDB on 3 servers. For the purpose of an example, I'm trying to set up a class "client" with 3 clusters: "client_1", "client_2", and "client_3". My servers are called node1, node2, and node3. I want the clusters arranged such that I have 2 copies of each cluster, so if 1 node goes down I still have access to all the data. For example:
node1 is master for client_1 and has a copy of client_2.
node2 is master for client_2 and has a copy of client_3.
node3 is master for client_3 and has a copy of client_1.
I've tried setting this up with the following steps:
1. Download OrientDB 2.1.1 Community and extract it onto the 3 servers.
2. Delete the GratefulDeadConcerts database from the databases directory on each server.
3. Edit default-distributed-db-config.json on node1 as follows:
{
"autoDeploy": true,
"hotAlignment": false,
"executionMode": "undefined",
"readQuorum": 1,
"writeQuorum": 2,
"failureAvailableNodesLessQuorum": false,
"readYourWrites": true,
"clusters": {
"internal": {
},
"index": {
},
"client_1": {
"servers" : [ "node1","node2" ]
},
"client_2": {
"servers" : [ "node2","node3" ]
},
"client_3": {
"servers" : [ "node3","node1" ]
},
"*": {
"servers" : [ "<NEW_NODE>" ]
}
}
}
4. Start node1 with dserver.sh.
5. Create a database using the console on node1:
connect remote:localhost root password
create database remote:localhost/testdb root password plocal graph
6. Create a class and rename the default cluster:
create class client extends v
alter cluster client name client_1
7. Start up node2 with dserver.sh, wait for the database to auto-deploy, then start up node3 and wait for it to deploy.
At this point I have a database on 3 nodes, with a class called "client" with only one cluster, "client_1".
8. On node2, add the client_2 cluster:
alter class client addcluster client_2
9. Similarly, on node3:
alter class client addcluster client_3
If I reconnect all console sessions and execute "list clusters", I now see all 3 clusters of the client class on each node. I also see the .cpm and .pcl files for each of the 3 clusters on each node. However, it appears that my intention in default-distributed-db-config.json is being taken into account, because if I wait a couple of minutes and then insert a record from each node, I see that the timestamps and file sizes only change on the files relating to the clusters that are supposed to be present on each node (it would be nice and less confusing if the files didn't exist on the wrong nodes, but it's not the end of the world).
So... now it appears that I have the database set up the way I intended. But the point of doing this is to be able to survive a server going down, so I shut down node3 with Ctrl-C. I can still see each of the records (I inserted 3, one per cluster) from both node1 and node2 - so far so good.
If I take a look at the contents of distributed-db.json on node1 or node2, I now see that my "client" class clusters have been reconfigured - there's no node3 in the config any longer:
"client_3": { "servers": [ "node1" ], "#version": 0, "#type": "d" },
"client_2": { "servers": [ "node2" ], "#version": 0, "#type": "d" },
"client_1": { "servers": [ "node1", "node2" ], "#version": 0,
"#type": "d" }
Now I restart node3, but the config does not get updated to include it again:
"client_3": { "servers": [ "node1" ], "#version": 0, "#type": "d" },
"client_2": { "servers": [ "node2" ], "#version": 0, "#type": "d" },
"client_1": { "servers": [ "node1", "node2" ], "#version": 0, "#type": "d" }
Is there something wrong in the way I've created/configured the database, or is this a bug?
I think the issue here is that "hotAlignment" needs to be set to "true" in the file "default-distributed-db-config.json". Per the OrientDB 2.2.x sharding doc: "If hotAlignment=false is set, when a node re-joins the cluster (after a failure or simply unreachability) the full copy of database from a node could have no all information about the shards." Note, though, this bullet from the changes between 2.1.x and 2.2.x: "Removed hotAlignment setting: servers, once they join the cluster, remain always in the configuration until they are manually removed."
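Concretely, a sketch of the question's default-distributed-db-config.json with only that flag flipped:
{
  "autoDeploy": true,
  "hotAlignment": true,
  "executionMode": "undefined",
  "readQuorum": 1,
  "writeQuorum": 2,
  "failureAvailableNodesLessQuorum": false,
  "readYourWrites": true,
  "clusters": {
    "internal": {},
    "index": {},
    "client_1": { "servers": [ "node1", "node2" ] },
    "client_2": { "servers": [ "node2", "node3" ] },
    "client_3": { "servers": [ "node3", "node1" ] },
    "*": { "servers": [ "<NEW_NODE>" ] }
  }
}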