How to get/set application name in .dxf for Groupcode 1001? - java

I convert geodata (coordinates, attributes,...) to a dxf file.
I write attributes into extended data, but under the group code 1001 there must be an application name. I tried to write "Test" and some other words in it, but nothing works.
I receive the error message:
Invalid application name in 1001 group on line 50.
What is the application name in this context, and where do I get it?

You are correct that DXF group 1001 should contain the Application ID of the Extended Entity Data (xData) attached to your entity.
This application ID may be an arbitrary name which fulfils the requirements of a symbol table name (these requirements are documented as part of the AutoLISP snvalid function). When specifying an Application ID, you should try to ensure that it is unique, and you should AVOID using ACAD, as this is reserved and used internally by AutoCAD.
The key point that is causing your file to fail to be parsed is that every Application ID referenced by xData within the file must also appear as a symbol table name within the APPID symbol table.
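As a minimal (R12-style) sketch, assuming an arbitrary ID such as TEST_APP and a made-up attribute value, the APPID table entry in the TABLES section would look like this:

0
SECTION
2
TABLES
0
TABLE
2
APPID
70
1
0
APPID
2
TEST_APP
70
0
0
ENDTAB
0
ENDSEC

and the xData appended to the entity would then reference the same name:

1001
TEST_APP
1000
my attribute value

With the TEST_APP record present in the APPID table, a 1001 group referencing it should no longer be rejected as an invalid application name.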

Related

Filtering a 'null' ID in Talend. Using a tFilterRow

I am using Talend to migrate information into Salesforce. As of right now, when running the job I receive an error: "java.lang.Exception: attempted to add a campaign member where either the member id '0033700009Ms49' or the campaign id 'null' is null".
Now, I put down a tFilterRow to filter out any campaign Id that is equal to 'null' and any contact Id that is equal to 'null'. I even put the specific ID that appeared in the error log. Could anyone point me in the right direction? I am beyond stumped.

Cannot find document when searching for field with Id using Java API in ElasticSearch

I have a field which contains forward slashes. I'm trying to execute this Query:
QueryBuilders.termQuery("id", QueryParser.escape("/my/field/val"))
and I cannot get any results. When I look for 'val' only, I get the proper results. Any ideas why that is happening? Of course, without escaping it also doesn't return any results.
UPDATE
So QueryParser.escape escapes the string properly, but when the request goes to Elasticsearch it is double escaped:
[2015-07-10 01:53:00,063][WARN ][index.search.slowlog.query] [Aaa AA] [index_name][4] took[420.8micros], took_millis[0], types[page], stats[], search_type[QUERY_THEN_FETCH], total_shards[5], source[{"query":{"term":{"pageId":"\\/path\\/and\\/testestest"}}}], extra_source[],
UPDATE 2: It works when I use a query_string query, but I would rather not use that and type everything by hand.
You might have to use _id instead of Id
So the reason why I didn't get any results was the default mapping in the index I had created.
I hadn't specified a mapping for my field, so Elasticsearch analyzed it with the default settings.
In the Elasticsearch documentation I read that during the analysis process Elasticsearch splits the string into words, lower-cases them, and does some other processing.
In my case "/path/in/my/field" was split into four terms:
path
in
my
field
So when I was searching for "pageId:/path/in/my/field" I didn't get any results, because pageId did not in fact contain that exact value.
To solve the issue I had to add a proper mapping to the pageId field that doesn't do any preprocessing (instead of four terms, I now have the single term "/path/in/my/field").
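For reference, a minimal sketch of such a mapping on Elasticsearch 1.x (the index name my_index is hypothetical; the type name page comes from the slow-log above):

PUT /my_index/_mapping/page
{
  "page": {
    "properties": {
      "pageId": {
        "type": "string",
        "index": "not_analyzed"
      }
    }
  }
}

With "index": "not_analyzed" the whole path is stored as a single term, so the termQuery above matches the exact value without any escaping. Note that an existing field cannot be switched to not_analyzed in place; the index has to be recreated and reindexed.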
Links to docs:
https://www.elastic.co/guide/en/elasticsearch/guide/current/analysis-intro.html
https://www.elastic.co/guide/en/elasticsearch/guide/current/mapping-intro.html

How to get last status message of each record using SQL?

Consider the following scenario.
I have a Java application which uses Oracle database to store some status codes and messages.
Example: I have a patient record that is processed in 3 layers (assume 1. receiving class, 2. translation class, 3. sending class). We store data into the database at each layer. When we run a query, it shows something like this:
Name Status Status_Message
XYZ 11 XML message received
XYZ 21 XML message translated to swift format
XYZ 31 Completed message send to destination
ABC 11 XML message received
ABC 21 XML message translated to swift format
ABC 91 Failed message send to destination
In my Java class I am executing the query below to get the last status message:
select STATUS_MESSAGE from INTERFACE_MESSAGE_STATUS
where NAME = ? order by STATUS
I publish this status message on a webpage. But my problem is that I am not getting the last status message; it behaves inconsistently, sometimes printing "XML message received", sometimes "XML message translated to swift format", etc.
But I want to publish the last status like "Completed message send to destination" or "Failed message send to destination" depending on the last status. How can I do that? Please suggest.
You can use a query like this:
select i.STATUS_MESSAGE
from INTERFACE_MESSAGE_STATUS i,
     (select max(status) as last_status
      from INTERFACE_MESSAGE_STATUS
      where name = ?) s
where i.name = ?
  and i.status = s.last_status
In the above example I am assuming that the row with the highest status value is the last status.
I would recommend creating a view from this select query and then using that in your codebase. The reason is that it is much easier to read and makes it possible to easily select the last status for multiple records without complicating your queries too much.
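A minimal sketch of such a view (assuming you want the last status per name, so the bind parameter is replaced by a GROUP BY; the view name LAST_MESSAGE_STATUS is just an example):

create or replace view LAST_MESSAGE_STATUS as
select i.NAME, i.STATUS, i.STATUS_MESSAGE
from INTERFACE_MESSAGE_STATUS i
join (select NAME, max(STATUS) as LAST_STATUS
      from INTERFACE_MESSAGE_STATUS
      group by NAME) s
  on i.NAME = s.NAME
 and i.STATUS = s.LAST_STATUS

The Java code then only needs: select STATUS_MESSAGE from LAST_MESSAGE_STATUS where NAME = ?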
You have no explicit ordering that guarantees which row you read. As the data is stored in a heap in Oracle, there is no specific order given; in other words, many factors influence which row you get. Only an explicit ORDER BY on a suitable column guarantees the desired order, possibly supported by an index on that column.
My suggestion: add a date_created field to your DB and sort based on that, for example:
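This is a sketch assuming the new DATE_CREATED column has been added; ROWNUM is applied after the inner ORDER BY, so it returns the newest row:

select STATUS_MESSAGE
from (
  select STATUS_MESSAGE
  from INTERFACE_MESSAGE_STATUS
  where NAME = ?
  order by DATE_CREATED desc
)
where ROWNUM = 1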

db already exists with different case other

I am trying to read data from MongoDB and I have a problem:
Exception in thread "main" com.mongodb.MongoException: db already exists with different case other
The exception is thrown from here:
DBCursor cur[] = new DBCursor[cursorSize];
...
cur[i].hasNext() // Exception
What is the problem?
The version of Mongo is 2.10.1.
This error indicates that you are trying to create a database that differs by case only from a database name that already exists. For example, if you already have a database called "test", you will get this error trying to create "Test", "TEST", or other variations of upper or lower case for the existing name.
The database name is used in naming the data extent files, so clashes in name could cause Bad Things to happen on case-insensitive file systems.
The MongoDB manual has further details on Naming Restrictions, including case sensitivity and restrictions specific to different operating systems.
The useful part of the error message appears to have been omitted in the question description, but what you should see as part of this message is the name of the existing database as well as the new name that is being rejected.
The corresponding MongoDB 2.4 server code snippet is:
ss << "db already exists with different case other: [" << duplicate << "] me [" << _name << "]";
I think Stennie has very well defined and explained why you might be getting this error. However, in my case I encountered an interesting variation which you or others may also run into. I had a database called "HDB", but I had added my user to the system.users collection with "db":"hdb" (lower case). So I spent an hour or so trying to see what could have gone wrong, even though I was able to log in. So, if you get this error, make sure you did not accidentally add your user with a lower/different case for the db name. To confirm this:
1. Log in as the admin/default account and run
db.system.users.find().pretty();
and then look for the user name that is getting this error, along with the "db" value in that JSON object, and compare it against the actual database you have.
2. Run
show dbs;
and compare the db you see in step one against the name of the db that you see in this step. (The command will show you all the databases you have, but you should only be concerned with the ones that you use/see in step one.)

Error in importing a tsv to hbase

I created a table in hbase using:
create 'Province','ProvinceINFO'
Now I want to import my data from a TSV file into it. My TSV file has two columns: ProvinceID (as the primary key) and ProvinceName.
I am using the command below for the import:
bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv '-Dimporttsv.separator=,' -Dimporttsv.columns= HBASE_ROW_KEY, ProvinceINFO:ProvinceName Province /usr/data/Province.csv
but it gives me this error:
ERROR: No columns specified. Please specify with -Dimporttsv.columns=...
Usage: importtsv -Dimporttsv.columns=a,b,c <tablename> <inputdir>
Imports the given input directory of TSV data into the specified table.
The column names of the TSV data must be specified using the -Dimporttsv.columns
option. This option takes the form of comma-separated column names, where each
column name is either a simple column family, or a columnfamily:qualifier. The special
column name HBASE_ROW_KEY is used to designate that this column should be used
as the row key for each imported record. You must specify exactly one column
to be the row key, and you must specify a column name for every column that exists in the
input data. Another special column HBASE_TS_KEY designates that this column should be
used as timestamp for each record. Unlike HBASE_ROW_KEY, HBASE_TS_KEY is optional.
You must specify at most one column as timestamp key for each imported record.
Record with invalid timestamps (blank, non-numeric) will be treated as bad record.
Note: if you use this option, then 'importtsv.timestamp' option will be ignored.
By default importtsv will load data directly into HBase. To instead generate
HFiles of data to prepare for a bulk data load, pass the option:
-Dimporttsv.bulk.output=/path/for/output
Note: if you do not use this option, then the target table must already exist in HBase
Other options that may be specified with -D include:
-Dimporttsv.skip.bad.lines=false - fail if encountering an invalid line
'-Dimporttsv.separator=|' - eg separate on pipes instead of tabs
-Dimporttsv.timestamp=currentTimeAsLong - use the specified timestamp for the import
-Dimporttsv.mapper.class=my.Mapper - A user-defined Mapper to use instead of
org.apache.hadoop.hbase.mapreduce.TsvImporterMapper
-Dmapred.job.name=jobName - use the specified mapreduce job name for the import
For performance consider the following options:
-Dmapred.map.tasks.speculative.execution=false
-Dmapred.reduce.tasks.speculative.execution=false
Maybe also try wrapping the columns in a string, i.e.
bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator=',' -Dimporttsv.columns="HBASE_ROW_KEY,ProvinceINFO:ProvinceName" Province /usr/data/Province.csv
You should try something like:
bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator=',' -Dimporttsv.columns=HBASE_ROW_KEY,ProvinceINFO:ProvinceName Province /usr/data/Province.csv
Try to remove the spaces in -Dimporttsv.columns=a,b,c.
