I am trying to reduce the number of methods generated by Google protobuf, and one of the alternatives is protobuf nano. However, I found no documentation on how to use it: apart from the package link, I can't find anything on how to generate Java files from .proto files using nano.
So the question is straightforward: how do I use Google protobuf nano to generate Java classes from .proto files, and how do I use them in a project?
Looking at the main protobuf compiler source code:
#include <google/protobuf/compiler/javanano/javanano_generator.h>
....
int main(int argc, char* argv[]) {
google::protobuf::compiler::CommandLineInterface cli;
cli.AllowPlugins("protoc-");
...
// Proto2 JavaNano
google::protobuf::compiler::javanano::JavaNanoGenerator javanano_generator;
cli.RegisterGenerator("--javanano_out", &javanano_generator,
"Generate Java source file nano runtime.");
return cli.Run(argc, argv);
}
Looking at protobuf's README:
Nano Generator options
java_package -> <file-name>|<package-name>
java_outer_classname -> <file-name>|<outer-classname>
java_multiple_files -> true or false
java_nano_generate_has -> true or false [DEPRECATED]
optional_field_style -> default or accessors
enum_style -> c or java
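As a sketch of how these fit together (the proto path and output directory here are placeholders, not from the README): global options such as enum_style and optional_field_style take no file-name prefix and are joined to the per-file options with commas:

```
./protoc '--javanano_out=enum_style=java,optional_field_style=accessors:gen' src/proto/simple-data.proto
```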
To use nano protobufs outside of Android repo:
Link with the generated jar file <protobuf-root>/java/target/protobuf-java-2.3.0-nano.jar.
Invoke with --javanano_out, e.g.:
./protoc '--javanano_out=java_package=src/proto/simple-data.proto|my_package,java_outer_classname=src/proto/simple-data.proto|OuterName:.' src/proto/simple-data.proto
To use nano protobufs within the Android repo:
Set 'LOCAL_PROTOC_OPTIMIZE_TYPE := nano' in your local .mk file. When building a Java library or an app (package) target, the build
system will add the Java nano runtime library to the
LOCAL_STATIC_JAVA_LIBRARIES variable, so you don't need to.
Set 'LOCAL_PROTO_JAVA_OUTPUT_PARAMS := ...' in your local .mk file for any command-line options you need. Use commas to join multiple
options. In the nano flavor only, whitespace surrounding the option
names and values is ignored, so you can use backslash-newline or
'+=' to structure your make files nicely.
The options will be applied to all proto files in LOCAL_SRC_FILES when you build a Java library or package. In case different options
are needed for different proto files, build separate Java libraries
and reference them in your main target. Note: you should make sure
that, for each separate target, all proto files imported from any
proto file in LOCAL_SRC_FILES are included in LOCAL_SRC_FILES. This
is because the generator has to assume that the imported files are
built using the same options, and will generate code that references
the fields and enums from the imported files using the same code
style.
Hint: 'include $(CLEAR_VARS)' resets all LOCAL_ variables, including the two above.
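Putting those settings together, a minimal .mk fragment might look like this (a sketch; the module name and proto path are placeholders, not taken from the README):

```make
include $(CLEAR_VARS)

LOCAL_MODULE := sample-proto-nano
LOCAL_SRC_FILES := proto/simple-data.proto
LOCAL_PROTOC_OPTIMIZE_TYPE := nano
# Whitespace around nano option names/values is ignored, so '+=' keeps this readable.
LOCAL_PROTO_JAVA_OUTPUT_PARAMS := java_package = proto/simple-data.proto|my_package
LOCAL_PROTO_JAVA_OUTPUT_PARAMS += , enum_style = java

include $(BUILD_STATIC_JAVA_LIBRARY)
```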
Simple nano example from https://android.googlesource.com/platform/external/protobuf/+/master/src/google/protobuf/.
unittest_simple_nano.proto
package protobuf_unittest_import;
option java_package = "com.google.protobuf.nano";
// Explicit outer classname to suppress legacy info.
option java_outer_classname = "UnittestSimpleNano";
message SimpleMessageNano {
message NestedMessage {
optional int32 bb = 1;
}
enum NestedEnum {
FOO = 1;
BAR = 2;
BAZ = 3;
}
optional int32 d = 1 [default = 123];
optional NestedMessage nested_msg = 2;
optional NestedEnum default_nested_enum = 3 [default = BAZ];
}
Command line
./protoc '--javanano_out=java_package=google/protobuf/unittest_simple_nano.proto|com.google.protobuf.nano,java_outer_classname=google/protobuf/unittest_simple_nano.proto|UnittestSimpleNano:target/generated-test-sources' google/protobuf/unittest_simple_nano.proto
Test extracted from NanoTest.java
public void testSimpleMessageNano() throws Exception {
SimpleMessageNano msg = new SimpleMessageNano();
assertEquals(123, msg.d);
assertEquals(null, msg.nestedMsg);
assertEquals(SimpleMessageNano.BAZ, msg.defaultNestedEnum);
msg.d = 456;
assertEquals(456, msg.d);
SimpleMessageNano.NestedMessage nestedMsg = new SimpleMessageNano.NestedMessage();
nestedMsg.bb = 2;
assertEquals(2, nestedMsg.bb);
msg.nestedMsg = nestedMsg;
assertEquals(2, msg.nestedMsg.bb);
msg.defaultNestedEnum = SimpleMessageNano.BAR;
assertEquals(SimpleMessageNano.BAR, msg.defaultNestedEnum);
byte [] result = MessageNano.toByteArray(msg);
int msgSerializedSize = msg.getSerializedSize();
//System.out.printf("mss=%d result.length=%d\n", msgSerializedSize, result.length);
assertTrue(msgSerializedSize == 9);
assertEquals(result.length, msgSerializedSize);
SimpleMessageNano newMsg = SimpleMessageNano.parseFrom(result);
assertEquals(456, newMsg.d);
assertEquals(2, msg.nestedMsg.bb);
assertEquals(SimpleMessageNano.BAR, msg.defaultNestedEnum);
}
There are a lot of test cases in the same class, and looking at the project POM you can find a maven-antrun-plugin configuration that generates those test classes:
<!-- java nano -->
<exec executable="../src/protoc">
<arg value="--javanano_out=java_package=google/protobuf/unittest_import_nano.proto|com.google.protobuf.nano,java_outer_classname=google/protobuf/unittest_import_nano.proto|UnittestImportNano:target/generated-test-sources" />
<arg value="--proto_path=../src" />
<arg value="--proto_path=src/test/java" />
<arg value="../src/google/protobuf/unittest_nano.proto" />
<arg value="../src/google/protobuf/unittest_simple_nano.proto" />
<arg value="../src/google/protobuf/unittest_stringutf8_nano.proto" />
<arg value="../src/google/protobuf/unittest_recursive_nano.proto" />
<arg value="../src/google/protobuf/unittest_import_nano.proto" />
<arg value="../src/google/protobuf/unittest_enum_multiplejava_nano.proto" />
<arg value="../src/google/protobuf/unittest_multiple_nano.proto" />
</exec>
Hope this helps.
Initial situation
I made a little test for my project today. The goal: integrate .jar files into a C# project as a .dll. My current .java / .jar file looks like the following.
package ws;
public class Adding
{
public int Add(int a, int b)
{
return a + b;
}
}
I successfully converted the above into a .dll with IKVM (Version: 7.5.0.2).
I now want to reference this .dll in my C# project and call the Add(int a, int b) method. I already added the reference like so:
Anyway, I am not able to call the method, because the compiler can't find the .dll reference:
using Adding; // <= Compiler Error CS0246 (Can't find the reference)
Console.WriteLine(Add(1, 2));
Does anybody know how I could achieve this? I highly appreciate any kind of help, cheers!
Edit 1: Decompiling
I've decompiled the .dll, as requested in the comments, with ILSpy (version 7.2), which results in the following output.
// C:\Users\maikh\Desktop\Adding.dll
// Adding, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null
// Global type: <Module>
// Architecture: AnyCPU (64-bit preferred)
// Runtime: v4.0.30319
// Hash algorithm: SHA1
using System.Diagnostics;
using System.Reflection;
using System.Runtime.CompilerServices;
using IKVM.Attributes;
[assembly: Debuggable(true, false)]
[assembly: RuntimeCompatibility(WrapNonExceptionThrows = true)]
[assembly: AssemblyVersion("0.0.0.0")]
[module: SourceFile(null)]
[module: JavaModule(Jars = new string[] { "Adding.jar" })]
[module: PackageList(new string[] { })]
I also found some references while decompiling the .dll. I don't know if this is important to mention, but I'll provide it anyway.
// Detected TargetFramework-Id: .NETFramework,Version=v4.0
// Detected RuntimePack: Microsoft.NETCore.App
// Referenced assemblies (in metadata order):
// IKVM.Runtime, Version=7.5.0.2, Culture=neutral, PublicKeyToken=00d957d768bec828
// Assembly reference loading information:
// There were some problems during assembly reference load, see below for more information!
// Error: Could not find reference: IKVM.Runtime, Version=7.5.0.2, Culture=neutral, PublicKeyToken=00d957d768bec828
// mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
// Assembly reference loading information:
// Info: Success - Found in Assembly List
// Assembly load log including transitive references:
// IKVM.Runtime, Version=7.5.0.2, Culture=neutral, PublicKeyToken=00d957d768bec828
// Error: Could not find reference: IKVM.Runtime, Version=7.5.0.2, Culture=neutral, PublicKeyToken=00d957d768bec828
// mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
// Info: Success - Found in Assembly List
Edit 2: Decompiling V2
I've managed to add the missing reference, IKVM.Runtime. Nevertheless, I can't find any information about the namespace, class, or method.
First of all, you are using a class as a namespace, and that is probably not correct. Your method call should probably look something like this:
var adder = new Adding();
Console.WriteLine(adder.Add(1, 2));
If that does not work, I would inspect the produced DLL to verify that it is a conforming .NET DLL. That should also show the namespaces, class names, and other information. A decompiler like dotPeek or ILSpy might show the same information in a format that may be easier to read.
Since your Java class is in the ws package, you should be using ws in your C# code:
using ws;
Adding adding = new Adding();
Console.WriteLine(adding.Add(1, 2));
And if you want to call the Add method statically, declare it static in Java:
package ws;
public class Adding
{
public static int Add(int a, int b)
{
return a + b;
}
}
using ws;
Console.WriteLine(Adding.Add(1, 2));
I'm currently trying to build a Bazel 0.11.1 workspace with projects that use different Java language levels. The actual application uses Java 7, but for some code that won't ship, I want to settle on a more recent Java version to be able to use the new language features.
I can solve this to some extent by using --javacopt in .bazelrc, setting -source 1.7 -target 1.7 and override the defaults on a project level with the javacopts attribute, but this is not enough to ensure proper Java 7 compatibility when compiling with a more recent Java version. For this I really need to compile Java 7 projects against a Java 7 classpath as well.
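For reference, that partial approach looks roughly like this (the target name is a placeholder):

```
# .bazelrc: compile for Java 7 by default
build --javacopt="-source 1.7 -target 1.7"

# BUILD file of a non-shipping project: override the default
java_library(
    name = "devtools",
    srcs = glob(["src/**/*.java"]),
    javacopts = ["-source", "1.8", "-target", "1.8"],
)
```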
The only way to use a custom bootclasspath seems to be via java_toolchain. It works, but I could not find a way to use different bootclasspaths for different projects, because the toolchain affects all projects and, unlike javacopts, cannot be overridden at the project level.
Is this a use case that is simply not (yet?) possible with Bazel? Or is there some trickery to make it work?
It turns out there is a way: write a custom rule that performs compilation.
The java_common module provides an interface to the compiler.
library.bzl
def _impl(ctx):
deps = []
for dep in ctx.attr.deps:
if java_common.provider in dep:
deps.append(dep[java_common.provider])
output_jar = ctx.new_file("lib{0}.jar".format(ctx.label.name))
runtime = java_common.JavaRuntimeInfo
compilation_provider = java_common.compile(
ctx,
source_jars = ctx.files.srcs_jars,
source_files = ctx.files.srcs,
output = output_jar,
javac_opts = ctx.attr.javac_opts,
deps = deps,
strict_deps = ctx.attr.strict_deps,
java_toolchain = ctx.attr.toolchain,
host_javabase = ctx.attr._host_javabase,
resources = ctx.files.resources,
neverlink = ctx.attr.neverlink,
)
return struct(
files = depset([output_jar]),
providers = [compilation_provider],
)
library = rule(
implementation = _impl,
attrs = {
"srcs_jars": attr.label_list(allow_files=True),
"srcs": attr.label_list(allow_files=True),
"javac_opts": attr.string_list(default=[]),
"deps": attr.label_list(),
"strict_deps": attr.string(default="ERROR"),
"toolchain": attr.label(default=Label("#bazel_tools//tools/jdk:toolchain")),
"sourcepath": attr.label_list(),
"resources": attr.label_list(),
"neverlink": attr.bool(default=False),
"_host_javabase": attr.label(default=Label("#local_jdk//:jdk")),
},
fragments = ["java"],
)
This rule I can now use to set a different toolchain for compilation.
BUILD
load('//build/jdk:library.bzl', 'library')
library(
name = "test",
srcs = glob(["src/main/java/**/*.java"]),
# data = glob(["src/main/resources/**"]),
toolchain = "//build/jdk:jdk8",
deps = ["..."],
)
Unfortunately I'm not 100% there yet. java_common.compile does not seem to have an equivalent for the data attribute of java_library, but for the most part the toolchain problem is solved.
My objective was to use JNI to access functions from kernel32.dll. As you can see below, I was doing pretty badly. I wrote down the whole procedure in the answer.
Kernel32.java :
package tn.kernel;
public final class Kernel32 {
public static boolean loadKernel32(){
System.loadLibrary("kernel32");
return true;
}
public static native boolean K32EnumProcesses(int[] pProcessIds, int cb, int[] pBytesReturned);
}
MainClass.java :
package tn.kernel;
public class MainClass {
public static void main(String[] args) {
System.out.println("Program started.");
if(Kernel32.loadKernel32())
System.out.println("Kernel32.dll loaded.");
int n = 2000;
int[] procs = new int[n];
int ls = Integer.SIZE;
int[] rs = new int[1];
if(Kernel32.K32EnumProcesses(procs, ls * n, rs)){
System.out.println("Success");
}
System.out.println("Done.");
}
}
OUTPUT :
Program started.
Kernel32.dll loaded.
Exception in thread "main" java.lang.UnsatisfiedLinkError: tn.kernel.Kernel32.K32EnumProcesses([II[I)Z
at tn.kernel.Kernel32.K32EnumProcesses(Native Method)
at tn.kernel.MainClass.main(MainClass.java:15)
This is the syntax for EnumProcesses:
BOOL WINAPI EnumProcesses(
_Out_ DWORD *pProcessIds,
_In_ DWORD cb,
_Out_ DWORD *pBytesReturned
);
If PSAPI_VERSION is 2 or greater, this function is defined as K32EnumProcesses in Psapi.h and exported in Kernel32.lib and Kernel32.dll. If PSAPI_VERSION is 1, this function is defined as EnumProcesses in Psapi.h and exported in Psapi.lib and Psapi.dll as a wrapper that calls K32EnumProcesses. Source: msdn.microsoft.com
I tried with both K32EnumProcesses and EnumProcesses. Same results.
Creating a 64-bit Dynamic-Link Library for Windows
Prerequisites: Visual Studio
1/ Create a Visual Studio C++ project (ex: dllexample)
• Select “Win32 Console Application”
Select “DLL”
Select “Empty project”
2/ In “Solution explorer” right-click on “Header Files” > “Add” > “New Item…” > choose a name (ex: dllexample.h) > “Add”
• Define the headers of your functions in “dllexample.h” this way:
__declspec(dllexport) <type> funcName(parameters…);
…
3/ In “Solution explorer” right-click on “Source Files” > “Add” > “New Item…” > choose a name (ex: dllexample.cpp) > “Add”
• Use:
#include "dllexample.h"
• Define the body of your functions (from the “dllexample.h” header file) in the “dllexample.cpp” source file:
<type> funcName(parameters…){
//body instructions
}
• In the upper toolbar select “x64”
• Select “Build” > “Build Solution”
4/ Done
• You can find “dllexample.dll” and “dllexample.lib” in “projects/dllexample/x64/Debug”
• You can find “dllexample.h” in “projects/dllexample/dllexample”
Calling a 64-bit DLL file (ex: dllexample.dll) from another 64-bit DLL or executable file on Windows
Prerequisites: “dllexample.dll”, “dllexample.lib” and “dllexample.h” or a precise functions description or guide and Visual Studio
1/ Create a Visual Studio C++ project (ex: dllcall)
• Select “Win32 Console Application”
Select “DLL” to create a DLL file, “Console Application” to create an executable file
Select “Empty project”
2/ Copy “dllexample.dll”, “dllexample.lib” and “dllexample.h” to “projects/dllcall/dllcall”
3/ In “Solution explorer” right-click on “Header Files” > “Add” > “Existing Item…” > select ”dllexample.h” > “Add”
• If you’re making a DLL file, create a new header file (ex: dllcall.h) in which you define the headers of your functions this way:
__declspec(dllexport) <type> funcName(parameters…);
…
4/ In “Solution explorer” right-click on “Source Files” > “Add” > “New Item…” > choose a name (ex: dllcall.cpp) > “Add”
• Use:
#include "dllexample.h"
• If you’re creating a DLL file, use:
#include "dllcall.h"
And then define the body of the functions (from the “dllcall.h” header file) in the “dllcall.cpp” source file. At the same time you can call functions from “dllexample.h”:
<type> funcName(parameters…){
//body instructions
}
• In the upper toolbar select “x64”
• In “Solution explorer”, right-click on “dllcall” > “Properties” > “Linker” > “Input” > “Additional Dependencies” > “Edit” > add “dllexample.lib” (this option will be set only for the x64 debugger of the current Visual Studio project)
• Select “Build” > “Build Solution” to generate the DLL and the Import Library (.lib) files, “Run” to generate the executable file and test it
5/ Done
• You can find “dllcall.dll” and “dllcall.lib” or “dllcall.exe” in “projects/dllcall/x64/Debug”
• You can find “dllcall.h” in “projects/dllcall/dllcall”
Calling a 64-bit DLL file (ex: dllexample.dll) from a 64-bit Java program through JNI
Prerequisites: “dllexample.dll”, “dllexample.lib” and “dllexample.h” or a precise functions description or guide. Visual Studio, Eclipse with JNI plugin
1/ Create an Eclipse Java project (ex: dlltest)
2/ Create a class (ex: my.package.JNIClass)
• Define methods headers as close as the functions definitions in “dllexample.h” using keywords:
public static final native <type> funcName(parameters..);
…
• Run the project to generate “JNIClass.class”
3/ Open a command line in the folder “workspace/dlltest/src”
• Generate “my_package_JNIClass.h” header file by running command:
javah my.package.JNIClass
4/ Create a 64-bit DLL file (ex: dllcall.dll) that calls “dllexample.dll” and includes “my_package_JNIClass.h”
• In the “dllcall.cpp” source file, define the bodies of functions that are defined in the “my_package_JNIClass.h” header file
• “my_package_JNIClass.h” includes “jni.h”, to make it work, you must go to “Solution explorer” in Visual Studio and right-click on “Properties” > “Configuration Properties” > “C/C++” > “General” > “Additional Include Directories” > add the 64 bits “java/include” and “java/include/win32” paths (this option will be set only for the x64 debugger of the current Visual Studio project)
5/ Copy “dllcall.dll” and “dllexample.dll” to “workspace/dlltest/src”
• In “Package explorer”, right-click on “dlltest” > “Properties” > “Java Build Path” > “Source” > expand “my/package/src” > select “Native Library Location” > “Edit” > add “my/package/src” as location path
• Import the DLL files in “JNIClass.java” using:
static {
System.loadLibrary("dllexample");
System.loadLibrary("dllcall");
}
6/ If the 64-bit JRE is not selected, go to “Run” > “Run Configurations” > “JRE” > “Alternate JREs” > “Installed JREs” > add the 64-bit Java JDK directory (it should be something like “C:\Program Files\Java\jdk”, while the 32-bit Java JDK can be found in the “Program Files (x86)” folder)
7/ Done
• Now you can use the methods you defined in step 2
I have a java project and an ANT script to build and then distribute the project to other projects with a simple copy command.
I would like to only name the place where to copy to files to once in the header of the ant script, and not have an explicit copy-task for every project that is dependent on this project.
I can't find anything like arrays in ANT, so what would be the cleanest way of distributing something to multiple directories?
Following up on what I commented under Martin's answer, I'd like to post my version of the solution as another option. I am using the property names from Martin's answer for clarity.
<target name="deploy" >
<property name="dest.dirs" value="/dir/one,/dir/two,/dir/thr/ee" />
<for list="${dest.dirs}" param="dest.dir" parallel="true" delimiter="," >
<sequential>
<copy todir="#{dest.dir}" >
<fileset dir="${srd.dir}" />
</copy>
</sequential>
</for>
</target>
Please note that "for" is an Ant-Contrib task and uses macrodef behind the scenes, so you should use #{} to refer to "dest.dir". The "dest.dirs" value is split into a list by the delimiter; here we use a comma (which is also the default). I also added "parallel" so that files are copied to all the "dest.dirs" at the same time; however, if the project being copied is large, you should remove "parallel".
Please check http://ant-contrib.sourceforge.net/tasks/tasks/for.html
and http://ant-contrib.sourceforge.net/tasks/tasks/foreach.html for more information.
I don't believe you have many viable options: the copy task accepts only a single directory.
1. Create your own copy task that takes a list of directories.
2. Exec a script/program that does the copying.
3. Have the subprojects do a pull.
I'm really hesitant about having a project push to other projects, because that makes the assumption that those projects will work with the newly-pushed code. IMO the "sub-"projects should be making the decision if they want the new version or not.
To me this sounds more like a dependency management issue, better handled with Ivy/Maven/Gradle (or other Maven-alike).
All that said, it sounds like you'd want option 1: create a custom Ant task that accepts a list of destination directories. It might be pretty easy to extend the existing copy task to get all its functionality; just add a "todirs" property.
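A lighter-weight variant of option 1, if a full custom task feels like overkill, is a macrodef wrapping the copy task so each destination is a one-liner (a sketch; ${src.dir} is assumed to hold the files being distributed):

```xml
<macrodef name="copyto">
  <attribute name="dir" />
  <sequential>
    <copy todir="@{dir}">
      <fileset dir="${src.dir}" />
    </copy>
  </sequential>
</macrodef>

<target name="deploy">
  <copyto dir="/dir/one" />
  <copyto dir="/dir/two" />
  <copyto dir="/dir/three" />
</target>
```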
You might consider using a scriptmapper in your copy task with enablemultiplemappings true.
First, list the target directories in a property and create a filelist from it. (You could use a dirset, but the API for filelist is simpler.) Then run the copy, with the scriptmapper setting up the multiple destinations.
<property name="dest.dirs" value="/dir/one,/dir/two,/dir/thr/ee" />
<filelist id="dests" dir="/" files="${dest.dirs}" />
<copy todir="/" enablemultiplemappings="yes">
<fileset dir="${srd.dir}" />
<scriptmapper language="javascript">
<![CDATA[
// Obtain a reference to the filelist
var filelist = project.getReference( "dests" );
var dests = filelist.getFiles( project );
for ( var i = 0; i < dests.length; i++ )
{
self.addMappedName( dests[i] + "/" + source );
}
]]>
</scriptmapper>
</copy>
I have a java project that is built with buildr and that has some external dependencies:
repositories.remote << "http://www.ibiblio.org/maven2"
repositories.remote << "http://packages.example/"
define "myproject" do
compile.options.target = '1.5'
project.version = "1.0.0"
compile.with 'dependency:dependency-xy:jar:1.2.3'
compile.with 'dependency2:dependency2:jar:4.5.6'
package(:jar)
end
I want this to build a single standalone jar file that includes all these dependencies.
How do I do that?
(there's a logical followup question: How can I strip all the unused code from the included dependencies and only package the classes I actually use?)
This is what I'm doing right now. This uses autojar to pull only the necessary dependencies:
def add_dependencies(pkg)
tempfile = pkg.to_s.sub(/.jar$/, "-without-dependencies.jar")
mv pkg.to_s, tempfile
dependencies = compile.dependencies.map { |d| "-c #{d}"}.join(" ")
sh "java -jar tools/autojar.jar -baev -o #{pkg} #{dependencies} #{tempfile}"
end
and later:
package(:jar)
package(:jar).enhance { |pkg| pkg.enhance { |pkg| add_dependencies(pkg) }}
(caveat: I know little about buildr, this could be totally the wrong approach. It works for me, though)
I'm also learning Buildr and currently I'm packing Scala runtime with my application this way:
package(:jar).with(:manifest => _('src/MANIFEST.MF')).exclude('.scala-deps')
.merge('/var/local/scala/lib/scala-library.jar')
No idea if this is inferior to autojar (comments are welcome), but it seems to work with a simple example. It takes 4.5 minutes to package that scala-library.jar, though.
I'm going to use Cascading for my example:
cascading_dev_jars = Dir[_("#{ENV["CASCADING_HOME"]}/build/cascading-{core,xml}-*.jar")]
#...
package(:jar).include cascading_dev_jars, :path => "lib"
Here is how I create an uberjar with Buildr; this customizes what is put into the jar and how the manifest is created:
assembly_dir = 'target/assembly'
main_class = 'com.something.something.Blah'
artifacts = compile.dependencies
artifacts.each do |artifact|
Unzip.new( _(assembly_dir) => artifact ).extract
end
# remove dirs from assembly that should not be in uberjar
FileUtils.rm_rf( "#{_(assembly_dir)}/example/package" )
FileUtils.rm_rf( "#{_(assembly_dir)}/example/dir" )
# create manifest file
File.open( _("#{assembly_dir}/META-INF/MANIFEST.MF"), 'w') do |f|
f.write("Implementation-Title: Uberjar Example\n")
f.write("Implementation-Version: #{project_version}\n")
f.write("Main-Class: #{main_class}\n")
f.write("Created-By: Buildr\n")
end
present_dir = Dir.pwd
Dir.chdir _(assembly_dir)
puts "Creating #{_("target/#{project.name}-#{project.version}.jar")}"
`jar -cfm #{_("target/#{project.name}-#{project.version}.jar")} #{_(assembly_dir)}/META-INF/MANIFEST.MF .`
Dir.chdir present_dir
There is also a version that supports Spring, by concatenating all the spring.schemas files.