Azure Synapse Database To And From Netezza - Most Efficient Approach - java

We were hoping to load data into Azure Synapse (cloud) from Netezza, and vice versa, using Qlik; however, we are finding the performance unacceptable. What is the fastest way to achieve this?
We have some in-house tools written in Java that perform this task, but I have no clue how to run this code in the native cloud environment, or whether that is even feasible.
I do not have much experience with the cloud, so any guidance about where to spend my time to reach my goal faster would be appreciated.

From Netezza, the fastest way is 'CREATE EXTERNAL TABLE ... AS SELECT ...'.
If your Netezza is new enough (CP4D) you can even refer to a file location on the cloud directly; otherwise you may need a (fast) file store on both Azure and on-prem.
A bit of reading:
https://learn.microsoft.com/en-us/azure/synapse-analytics/sql/develop-tables-external-tables?tabs=hadoop
https://www.ibm.com/docs/en/SSULQD_7.2.1/com.ibm.nz.load.doc/c_load_create_external_tbl_syntax.html
Basically you will need to use UTF8 (aka ‘internal’ on Netezza) and choose 5 special characters:
an escape character (usually '\')
a column delimiter (usually the TAB character)
a row delimiter (usually new-line)
a string delimiter (usually a double quote '"')
a NULL indicator character (usually a star '*')
Choose the same 5 characters on both ends and do a binary file transfer of some sort (FTP/SFTP, HTTP, or a dedicated Azure copy tool of some sort) and you should be good :) A rough sketch of the Netezza unload step follows below.
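For illustration, here is a minimal Java/JDBC sketch of the Netezza unload step (not part of the original answer). The driver class, connection URL, host, file path and table name are placeholder assumptions, and the USING options should be double-checked against the IBM external-table doc linked above; the delimiter, NULL indicator and encoding follow the five characters discussed above, and the escape/quote options should be set the same way on the Synapse side.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class NetezzaUnload {
    public static void main(String[] args) throws Exception {
        // Assumed class/URL for the classic Netezza JDBC driver -- adjust to your environment.
        Class.forName("org.netezza.Driver");
        String url = "jdbc:netezza://nz-host:5480/MYDB";   // placeholder host and database

        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement stmt = con.createStatement()) {
            // Unload MY_TABLE (placeholder) into a flat file using the characters agreed on
            // for both ends. EscapeChar / QuotedValue options are omitted here; set them to
            // match the load side as well (see the IBM doc above).
            String unload =
                "CREATE EXTERNAL TABLE '/export/my_table.dat' USING (" +
                " DELIMITER '\t'" +         // column delimiter: TAB
                " NULLVALUE '*'" +          // NULL indicator: star
                " ENCODING 'internal'" +    // UTF-8 ('internal' on Netezza)
                " REMOTESOURCE 'JDBC'" +    // the file is written on the client side of the JDBC connection
                ") AS SELECT * FROM MY_TABLE";
            stmt.execute(unload);
        }
    }
}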

Related

Right datastore to store vernacular application data in backend, based on users input language

We are building a vernacular application in which we need to show the information/content to the user based on the language selected by user. Currently we support English and 10 regional languages.
The data is static (it changes, but infrequently), but we still need to serve it from the backend based on the input header APP_LANGUAUGE: EN in the API calls we get from the client.
We initially thought that the simplest way to do this is to store the constants for all supported languages in the app config files and, based on the language header, serve that language's constants. We can't store them in a DB (the DB would cost us the 'select' query time and thus increase the app loading time, which we really don't want).
Is there a better place (backend data store) to save the values of these constants for all the supported languages, for instance ZooKeeper? How would you make the trade-off here between config files, ZooKeeper, or any other config server? These values are not going to change very frequently, but they do change sometimes.
How much data do you have?
It looks like you could save the constants in a file and load them at app start into your local heap space, e.g. into a Map, because they change very rarely.
Of course you could use Ehcache or other key-value stores as well.
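As a rough sketch of the file-plus-Map idea (not from the original answer): it assumes hypothetical per-language files such as messages_en.properties on the classpath, loaded once at application start and then served straight from the heap.

import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class MessageStore {
    // language code (e.g. "EN") -> constants for that language
    private final Map<String, Properties> byLanguage = new HashMap<>();

    public MessageStore(List<String> languageCodes) throws IOException {
        for (String code : languageCodes) {
            Properties props = new Properties();
            // Hypothetical naming convention: messages_en.properties, messages_hi.properties, ...
            String resource = "messages_" + code.toLowerCase() + ".properties";
            try (InputStream in = getClass().getClassLoader().getResourceAsStream(resource)) {
                if (in != null) {
                    props.load(in);
                }
            }
            byLanguage.put(code.toUpperCase(), props);
        }
    }

    /** Look up a constant for the language code sent in the language header. */
    public String get(String languageCode, String key) {
        Properties props = byLanguage.get(languageCode.toUpperCase());
        return props == null ? null : props.getProperty(key);
    }
}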

How to check if a Java String will fit into Cassandra TEXT column before writing it?

We support an application that has some bad design.
This application stores data in a Cassandra cluster in a TEXT column and sometimes writes quite large Strings in this column and we get a WriteFailureException.
Cassandra has a limit on the write size (16mb by default: https://docs.datastax.com/en/dse/6.7/dse-admin/datastax_enterprise/config/configCassandra_yaml.html#configCassandra_yaml__max_mutation_size_in_kb) which is great.
We would like to notify the user that they are trying to write a large chunk of data in case such limit is reached.
As I understand it, there is no way to distinguish whether this exception occurred because of this limit or due to some other error inside the Cassandra cluster.
It would be even better to check whether the size of the data exceeds the limit before trying to write it to Cassandra.
A Java String is UTF-16 and Cassandra's TEXT is UTF-8, so my naive approach is to convert the String to UTF-8 and check its size, like this: s.getBytes(StandardCharsets.UTF_8).length
However, it seems quite expensive to convert the String to UTF-8 just to throw the byte array away.
Is there a sane way to do it? How do people check if their data fits in Cassandra before writing it?
Java 8, Cassandra 3.11
A better way is to check not the size of the individual strings but the size of the whole request, because it also depends on the protocol version. If you're using prepared statements, you can bind the values and then call requestSizeInBytes on the bound statement (for driver 3.x), like this (source code):
int stmtSize = boundStatement.requestSizeInBytes(protocolVersion, codecRegistry);
For driver 4.x it's the computeSizeInBytes function (doc)
But take into account that this is an approximate size; it should still be a good enough approximation.
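A rough sketch of that check with the 3.x Java driver (not from the original answer); the contact point, keyspace, table, statement and the 16 MB threshold are placeholder assumptions, and the calls should be verified against the driver version you actually use.

import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.CodecRegistry;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ProtocolVersion;
import com.datastax.driver.core.Session;

public class MutationSizeCheck {
    // Assumed limit: the 16 MB default for max_mutation_size_in_kb; leave some headroom in practice.
    private static final int MAX_REQUEST_BYTES = 16 * 1024 * 1024;

    public static void main(String[] args) throws Exception {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("my_keyspace")) {                     // placeholder keyspace
            PreparedStatement ps =
                session.prepare("INSERT INTO my_table (id, payload) VALUES (?, ?)"); // placeholder table
            BoundStatement bound = ps.bind("some-id", "...large text...");

            ProtocolVersion version =
                cluster.getConfiguration().getProtocolOptions().getProtocolVersion();
            CodecRegistry registry = cluster.getConfiguration().getCodecRegistry();

            int requestSize = bound.requestSizeInBytes(version, registry);
            if (requestSize > MAX_REQUEST_BYTES) {
                // Tell the user instead of letting the write fail server-side.
                throw new IllegalArgumentException(
                    "Payload too large: request is approximately " + requestSize + " bytes");
            }
            session.execute(bound);
        }
    }
}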

WebApp: suggestion about how save a lot of information from a Sphinx search

I have an issue with my webapp: it's an intranet search webapp that asks Sphinx http://sphinxsearch.com/ (the actual search engine) for a query typed by the user. The problem is that the result could be very big (even for an intranet), so I want to save the result on the server to handle a sort of lazy loading of the data.
I am thinking of using Hibernate, but if the result is very big and I save, for example, 40,000 items, will that be too much work for Hibernate? And for retrieving them?
Any suggestions?
Thanks in advance
You can use a limit and offset in Sphinx itself: http://sphinxsearch.com/docs/2.1.3/api-func-setlimits.html. From that doc, a word about limiting the result from the Sphinx server (which is 1000 by default):
One thousand records is enough to present to the end user. And if you're thinking about pulling the results to application for further sorting or filtering, that would be much more efficient if performed on Sphinx side.
Maybe I'm missing something, but why not just get it piecemeal directly out of Sphinx? You can just get a small page's worth of results at a time with setLimits.
That way you only download the data as you need it.
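A rough sketch of that paging with the Sphinx Java client (sphinxapi.java), not taken from the original answers; the package name, host, port, index name and page size are assumptions here, so check the method names against the client version you actually ship.

import org.sphx.api.SphinxClient;
import org.sphx.api.SphinxResult;

public class SphinxPaging {
    public static void main(String[] args) throws Exception {
        SphinxClient client = new SphinxClient();
        client.SetServer("sphinx-host", 9312);   // placeholder host/port

        int pageSize = 50;                       // one "page" shown to the user
        int page = 0;                            // e.g. taken from a request parameter

        // Only fetch the slice the user is currently looking at,
        // instead of pulling the whole result set into the webapp.
        client.SetLimits(page * pageSize, pageSize);
        SphinxResult result = client.Query("user query", "my_index"); // placeholder index

        if (result != null) {
            System.out.println("total found: " + result.totalFound);
            for (int i = 0; i < result.matches.length; i++) {
                System.out.println("doc id: " + result.matches[i].docId);
            }
        }
    }
}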

Fastest way to export keys from cassandra

What is the fastest way to export all the row keys from a column family in Cassandra (0.7.x and later versions) with Java APIs or other tools?
Currently I am using the Java Pelops API and paging through all records, but I'm wondering if there is a better mechanism.
I am specifically interested in exporting only the row keys (no columns/subcolumns), so I'm wondering if there is a part of Cassandra's direct storage APIs that could be used to do this as quickly as possible (bypassing Thrift).
What about using the Java Hector client? Sample taken from
https://github.com/rantav/hector/wiki/User-Guide
RangeSlicesQuery<String, String, String> rangeSlicesQuery =
HFactory.createRangeSlicesQuery(keyspace, stringSerializer,
stringSerializer, stringSerializer);
rangeSlicesQuery.setColumnFamily("Standard1");
rangeSlicesQuery.setKeys("fake_key_", "");
rangeSlicesQuery.setReturnKeysOnly(); // use this
rangeSlicesQuery.setRowCount(5);
Result<OrderedRows<String, String, String>> result = rangeSlicesQuery.execute();
Thrift is the API interface for Cassandra. Going directly to the storage layer would require you to read the data files in binary form. The code above should give you good performance.
If you need this for a one-time export, then I would say it's OK. If you need this for production, you should reconsider your data model; you may be doing something wrong.
You may need to split the query into multiple key ranges in case you need to scan many rows, as sketched below.
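A rough sketch of that key-range paging with Hector (not from the original answer); the page size is arbitrary, and depending on your Hector version the query result type may be Result rather than QueryResult as used here.

import java.util.List;

import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.beans.OrderedRows;
import me.prettyprint.hector.api.beans.Row;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.query.QueryResult;
import me.prettyprint.hector.api.query.RangeSlicesQuery;

public class KeyExporter {

    public static void exportAllKeys(Keyspace keyspace, String columnFamily, int pageSize) {
        StringSerializer ss = StringSerializer.get();
        String startKey = "";

        while (true) {
            RangeSlicesQuery<String, String, String> query =
                HFactory.createRangeSlicesQuery(keyspace, ss, ss, ss);
            query.setColumnFamily(columnFamily);
            query.setKeys(startKey, "");
            query.setReturnKeysOnly();           // we only want the row keys
            query.setRowCount(pageSize);

            QueryResult<OrderedRows<String, String, String>> result = query.execute();
            OrderedRows<String, String, String> rows = result.get();
            List<Row<String, String, String>> rowList = rows.getList();
            if (rowList.isEmpty()) {
                break;
            }
            for (Row<String, String, String> row : rowList) {
                System.out.println(row.getKey()); // "export" to stdout as a placeholder
            }
            if (rowList.size() < pageSize) {
                break;                            // last page
            }
            // The last key of this page becomes the start of the next page;
            // it will be returned again, so skip the duplicate if that matters.
            startKey = rows.peekLast().getKey();
        }
    }
}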

read/write to a large size file in java

I have a binary file with the following format:
[N bytes identifier & record length] [n1 bytes data]
[N bytes identifier & record length] [n2 bytes data]
[N bytes identifier & record length] [n3 bytes data]
As you can see, I have records with different lengths. In each record I have N fixed bytes which contain an ID and the length of the data in the record.
This file is very big and can contain 3 million records.
I want to open this file in an application and let the user browse and edit the records
(insert / update / delete records).
My initial plan is to create an index file from the original file and, for each record, keep the next and previous record addresses so I can navigate forward and backward easily (some sort of linked list, but in a file, not in memory).
Is there a library (Java library) to help me implement this requirement?
Any recommendations or experience that you think would be useful?
----------------- EDIT ----------------------------------------------
Thanks for the guidance and suggestions.
Some more info:
The original file and its format are out of my control (it's a third-party file) and I can't change the file format, but I have to read it, let the user navigate over the records and edit some of them (insert a new record / update an existing record / delete a record), and at the end save it back in the original file format.
Do you still recommend a database instead of a normal index file?
----------------- SECOND EDIT ----------------------------------------------
The record size in update mode is fixed. That means an updated (edited) record has the same length as the original record, unless the user deletes the record and creates another record with a different format.
Many Thanks
Seriously, you should NOT be using a binary file for this. You should use a database.
The problems with trying to implement this as a regular file stem from the fact that operating systems do not allow you to insert extra bytes into the middle of an existing file. So if you need to insert a record (anywhere but the end), update a record (with a different size) or remove a record, you would need to:
rewrite other records (after the insertion/update/deletion point) to make or reclaim space, or
implement some kind of free space management within the file.
All of this is complicated and / or expensive.
Fortunately, there is a class of software that implements this kind of thing. It is called database software. There are a wide range of options, ranging from using a full-scale RDBMS to light-weight solutions like BerkeleyDB files.
In response to your 1st and 2nd edits, a database will still be simpler.
However, here's an alternative that might perform better for this use-case than using a DB... without doing complicated free-space management.
Read the file and build an in-memory index that maps ids to file locations.
Create a second file to hold new and updated records.
Perform the record adds/updates/deletes:
An addition is handled by writing the new record to the end of the second file, and adding an index entry for it.
An update is handled by writing the updated record to the end of the second file, and changing the existing index entry to point to it.
A delete is handled by deleting the index entry for the record's key.
Compact the file as follows:
Create a new file.
Read each record in the old file in order, and check the index for the record's key. If the entry still points to the location of the record, copy the record to the new file. Otherwise skip it.
Repeat the previous step for the second file.
If all of the above completed successfully, delete the old file and the second file.
Note this relies on being able to keep the index in memory. If that is not feasible, then the implementation is going to be more complicated ... and more like a database.
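Purely as an illustration of steps 2 and 3 above (this is not the answerer's code), here is a sketch that assumes, for simplicity, a header of a 4-byte ID followed by a 4-byte data length; adjust it to the real record layout.

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.HashMap;
import java.util.Map;

/** Where a record currently lives: in the original file or in the second ("updates") file. */
class RecordLocation {
    final boolean inUpdateFile;
    final long offset;
    final int dataLength;

    RecordLocation(boolean inUpdateFile, long offset, int dataLength) {
        this.inUpdateFile = inUpdateFile;
        this.offset = offset;
        this.dataLength = dataLength;
    }
}

public class RecordStore {
    // Assumed header layout: 4-byte int ID + 4-byte int data length.
    private final RandomAccessFile original;
    private final RandomAccessFile updates;
    private final Map<Integer, RecordLocation> index = new HashMap<>();

    public RecordStore(RandomAccessFile original, RandomAccessFile updates) {
        this.original = original;
        this.updates = updates;
    }

    /** Adds and updates both go to the end of the second file and repoint the index. */
    public void put(int id, byte[] data) throws IOException {
        long offset = updates.length();
        updates.seek(offset);
        updates.writeInt(id);
        updates.writeInt(data.length);
        updates.write(data);
        index.put(id, new RecordLocation(true, offset, data.length));
    }

    /** A delete just removes the index entry; compaction drops the dead bytes later. */
    public void delete(int id) {
        index.remove(id);
    }

    /** Read whichever copy of the record the index currently points to. */
    public byte[] get(int id) throws IOException {
        RecordLocation loc = index.get(id);
        if (loc == null) {
            return null;
        }
        RandomAccessFile file = loc.inUpdateFile ? updates : original;
        file.seek(loc.offset + 8);               // skip the 8-byte header
        byte[] data = new byte[loc.dataLength];
        file.readFully(data);
        return data;
    }
}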
Having a data file and an index file would be the general base idea for such an implementation, but you'd pretty much find yourself dealing with data fragmentation upon repeated data updates/deletion, etc. This kind of project, in itself, should be a separate project and should not be part of your main application. However, essentially, a database is what you need as it is specifically designed for such operations and use cases and will also allow you to search, sort, and extend (alter) your data structure without having to refactor an in-house (custom) solution.
May I suggest you download Apache Derby and create a local embedded database (Derby does this for you when you create a new embedded connection at run-time). It will not only be faster than anything you'll write yourself, but it will also make your application easier to maintain.
Apache Derby is a single jar file that you can simply include and distribute with your project (check the license in case any legal issues apply to your app). There is no need for a database server or third-party software; it's all pure Java.
The bottom line is that it all depends on how large your application is, whether you need to share the data across many clients, whether speed is a critical aspect of your app, etc.
For a stand-alone, single-user project, I recommend Apache Derby. For an n-tier application, you might want to look into MySQL, PostgreSQL or (hrm) even Oracle. Using ready-made and tested solutions is not only smart, but will cut down your development time (and maintenance effort).
Cheers.
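A minimal sketch of the embedded-Derby route (not from the original answer); the database name, table and columns are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class EmbeddedDerbyExample {
    public static void main(String[] args) throws Exception {
        // ";create=true" creates the database directory on first use; no server needed.
        try (Connection con = DriverManager.getConnection("jdbc:derby:recordsdb;create=true")) {
            try (Statement stmt = con.createStatement()) {
                // Placeholder schema; this throws if the table already exists.
                stmt.executeUpdate("CREATE TABLE records (id INT PRIMARY KEY, data VARCHAR(255))");
            }
            try (PreparedStatement ps =
                     con.prepareStatement("INSERT INTO records (id, data) VALUES (?, ?)")) {
                ps.setInt(1, 1);
                ps.setString(2, "first record");   // a record's payload
                ps.executeUpdate();
            }
        }
    }
}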
Generally you are better off letting a library or database do the work for you.
You may not want to have an SQL database and there are plenty of simple databases which don't use SQL. http://nosql-database.org/ lists 122 of them.
At a minimum, if you are going to write this I suggest you read the source for one of these databases to see how they work.
Depending on the size of the records, 3 million isn't that much and I would suggest you keep as much in memory as possible.
The first problem you are likely to have is ensuring the data is consistent and recovering it when corruption occurs. The second problem is dealing with fragmentation efficiently (something the brightest minds working on GCs deal with). The third problem is likely to be maintaining the index in a transactional fashion with the source data to ensure there are no inconsistencies.
While this may appear simple at first, there are significant complexities in making sure the data is reliable, maintainable and efficiently accessible. This is why most developers use an existing database/datastore library and concentrate on the features which are unique to their application.
(Note: My answer is about the problem in general, not considering any Java libraries or - like the other answers also proposed - using a database (library), which might be better than reinventing the wheel)
The idea to create an index is good and will be very helpful performance-wise (although you wrote "index file", I think it should be kept in memory). Generating the index should be quite fast if you read the ID and record length for each entry and then just skip the data with a file seek.
You should also think about the edit functionality. Especially inserting and deleting can be very slow on such a big file if you do it wrong (e.g. deleting and then moving all the following entries to close the gap).
The best option would be to only mark deleted entries as deleted. When inserting, you can overwrite one of those or append to the end of the file.
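As a rough sketch of that index-building scan, assuming (purely for illustration) a header of a 4-byte ID followed by a 4-byte data length; only the header of each record is read and the data is skipped with a seek.

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.HashMap;
import java.util.Map;

public class IndexBuilder {

    /** Scan the file once and record the offset of every record by ID. */
    public static Map<Integer, Long> buildIndex(RandomAccessFile file) throws IOException {
        Map<Integer, Long> offsetsById = new HashMap<>();
        long offset = 0;
        long fileLength = file.length();

        while (offset < fileLength) {
            file.seek(offset);
            int id = file.readInt();              // assumed 4-byte ID
            int dataLength = file.readInt();      // assumed 4-byte data length
            offsetsById.put(id, offset);
            offset += 8 + dataLength;             // skip header + data, land on the next record
        }
        return offsetsById;
    }
}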
Insert / Update / Delete records
Inserting (rather than merely appending) records into a file and deleting records from it is expensive because you have to move all the following content of the file to create space for the new record or to reclaim the space it used. Updating is similarly expensive if the update changes the length of the record (you say they are variable length).
The file format you propose is fundamentally unsuitable for the kinds of operations you want to perform. Others have suggested using a database. If you don't want to go that far, adding an index file (as you suggest) is the way to go. I recommend making the index records all the same length.
As others have stated, a database seems a better solution. The following Java SQL DBs could be used: H2, Derby or HSQLDB.
If you want to use an index file, look at Berkeley DB or a NoSQL store.
If there is some reason for using a file, look at JRecord. It has:
Several classes for reading/writing files with variable-length binary records (they were written for Cobol VB files). Any of the Mainframe / Fujitsu / Open Cobol VB file structures should do the job.
An editor for editing JRecord files. The latest version of the editor can handle large files (it uses compression / a spill file). The editor suffers from having to download the whole file, and only one user can edit the file at a time.
The JRecord solution will only work if:
there is a limited number of users (preferably one), all located in the one location
there is a fast infrastructure
