Telegram bots have a 50 MB file size limit for sending files.
I need to send large files. Is there any way around this?
I know about this project https://github.com/pwrtelegram/pwrtelegram but I couldn't make it work.
Has anyone already solved this problem?
One option is to upload the file via the Telegram client API and then have the bot send it by file_id.
I'm writing the bot in Java using the library https://github.com/rubenlagus/TelegramBots
UPDATE
To solve this problem I used the Telegram client API, which has a 1.5 GB limit for big files.
I prefer kotlogram, a great library with good documentation: https://github.com/badoualy/kotlogram
UPDATE 2
An example of how I use this library:
private void uploadToServer(TelegramClient telegramClient, TLInputPeerChannel tlInputPeerChannel, Path pathToFile, int partSize) {
    File file = pathToFile.toFile();
    long fileId = getRandomId();
    // number of parts, rounded up
    int totalParts = Math.toIntExact((file.length() + partSize - 1) / partSize);
    int filePart = 0;
    try (InputStream is = new FileInputStream(file)) {
        byte[] buffer = new byte[partSize];
        int read;
        // read the file part by part and push each part to Telegram
        while ((read = is.read(buffer, 0, partSize)) != -1) {
            TLBytes bytes = new TLBytes(buffer, 0, read);
            telegramClient.uploadSaveBigFilePart(fileId, filePart, totalParts, bytes);
            telegramClient.clearSentMessageList();
            filePart++;
        }
    } catch (Exception e) {
        log.error("Error uploading file to server", e);
        return;
    }
    // the client is closed inside sendToChannel once the message has been sent
    sendToChannel(telegramClient, tlInputPeerChannel, "FILE_NAME.zip", fileId, totalParts);
}
private void sendToChannel(TelegramClient telegramClient, TLInputPeerChannel tlInputPeerChannel, String name, long fileId, int totalParts) {
    try {
        // crude MIME type guess from the file extension; use a proper mapping in real code
        String mimeType = name.substring(name.lastIndexOf(".") + 1);
        TLVector<TLAbsDocumentAttribute> attributes = new TLVector<>();
        attributes.add(new TLDocumentAttributeFilename(name));
        TLInputFileBig inputFileBig = new TLInputFileBig(fileId, totalParts, name);
        TLInputMediaUploadedDocument document = new TLInputMediaUploadedDocument(inputFileBig, mimeType, attributes, "", null);
        telegramClient.messagesSendMedia(false, false, false,
                tlInputPeerChannel, null, document, getRandomId(), null);
    } catch (Exception e) {
        log.error("Error sending file by id into channel", e);
    } finally {
        telegramClient.close();
    }
}
The TelegramClient and TLInputPeerChannel instances can be created as described in the library's documentation.
DON'T COPY-PASTE; adapt it to your needs.
With a local Telegram Bot API server you are allowed to send an InputStream with a 2000 MB file size limit, raised from the 50 MB default.
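What this can look like with the rubenlagus TelegramBots library from the question is sketched below. This is a minimal sketch, assuming a local server on 127.0.0.1:8081; the token, username, chat id, and file path are placeholders, and constructor and override names differ slightly between library versions.
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;

import org.telegram.telegrambots.bots.DefaultBotOptions;
import org.telegram.telegrambots.bots.TelegramLongPollingBot;
import org.telegram.telegrambots.meta.api.methods.send.SendDocument;
import org.telegram.telegrambots.meta.api.objects.InputFile;
import org.telegram.telegrambots.meta.api.objects.Update;
import org.telegram.telegrambots.meta.exceptions.TelegramApiException;

public class LargeFileBot extends TelegramLongPollingBot {

    public LargeFileBot() {
        // Point the bot at the local Bot API server instead of https://api.telegram.org
        super(newLocalServerOptions());
    }

    private static DefaultBotOptions newLocalServerOptions() {
        DefaultBotOptions options = new DefaultBotOptions();
        options.setBaseUrl("http://127.0.0.1:8081/");
        return options;
    }

    // Send a large file as a document stream; with the local server the effective limit is ~2000 MB
    public void sendBigFile(String chatId, File file) throws TelegramApiException, FileNotFoundException {
        SendDocument sendDocument = new SendDocument();
        sendDocument.setChatId(chatId);
        sendDocument.setDocument(new InputFile(new FileInputStream(file), file.getName()));
        execute(sendDocument);
    }

    @Override
    public void onUpdateReceived(Update update) {
        // handle incoming updates here
    }

    @Override
    public String getBotUsername() {
        return "YOUR_BOT_USERNAME";
    }

    @Override
    public String getBotToken() {
        return "YOUR_BOT_TOKEN";
    }
}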
If you want to send a file via a Telegram bot, you have three options:
InputStream (10 MB limit for photos, 50 MB for other files)
From an HTTP URL (Telegram will download and send the file; 5 MB max for photos and 20 MB max for other types of content)
Send cached files by their file_ids (there are no limits for files sent this way)
So I recommend storing file_ids beforehand and sending files by those ids (the API docs recommend this too).
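A minimal sketch of the third option with the rubenlagus TelegramBots library mentioned in the question; the chat ids are placeholders, the bot instance can be any DefaultAbsSender subclass, and note that the first upload that produces the file_id is still subject to the normal bot upload limits.
import java.io.File;

import org.telegram.telegrambots.bots.DefaultAbsSender;
import org.telegram.telegrambots.meta.api.methods.send.SendDocument;
import org.telegram.telegrambots.meta.api.objects.InputFile;
import org.telegram.telegrambots.meta.api.objects.Message;
import org.telegram.telegrambots.meta.exceptions.TelegramApiException;

public class FileIdSender {

    /** Uploads the file once and returns the file_id Telegram assigned to it.
     *  Note: this first upload is still subject to the normal bot upload limits. */
    public static String uploadOnce(DefaultAbsSender bot, String chatId, File file) throws TelegramApiException {
        SendDocument first = new SendDocument();
        first.setChatId(chatId);
        first.setDocument(new InputFile(file));
        Message sent = bot.execute(first);
        return sent.getDocument().getFileId();
    }

    /** Re-sends the already-uploaded document by its cached file_id (no size limits apply to resends). */
    public static void resendByFileId(DefaultAbsSender bot, String chatId, String fileId) throws TelegramApiException {
        SendDocument resend = new SendDocument();
        resend.setChatId(chatId);
        resend.setDocument(new InputFile(fileId));
        bot.execute(resend);
    }
}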
Using a Local Bot API Server you can send a large file up to 2GB.
GitHub Source Code:
https://github.com/tdlib/telegram-bot-api
Official Documentation:
https://core.telegram.org/bots/api#using-a-local-bot-api-server
You can build and install it on your server by following the instructions at https://tdlib.github.io/telegram-bot-api/build.html
Basic setup:
Generate a Telegram application id and hash at https://my.telegram.org/apps
Start the server: ./telegram-bot-api --api-id=<your-app-id> --api-hash=<your-app-hash> --verbosity=20
The default address is http://127.0.0.1:8081/ (port 8081).
All the official Bot API methods work with this setup; just change the address to http://127.0.0.1:8081/bot<token>/METHOD_NAME (reference: https://core.telegram.org/bots/api).
Example Code:
OkHttpClient client = new OkHttpClient().newBuilder()
        .build();
RequestBody body = new MultipartBody.Builder().setType(MultipartBody.FORM)
        .addFormDataPart("chat_id", "your_chat_id_here")
        .addFormDataPart("video", "file_location",
                RequestBody.create(MediaType.parse("application/octet-stream"),
                        new File("file_location")))
        .addFormDataPart("supports_streaming", "true")
        .build();
// http://127.0.0.1:8081/bot<token>/METHOD_NAME
Request request = new Request.Builder()
        .url("http://127.0.0.1:8081/bot<token>/sendVideo")
        .method("POST", body)
        .build();
Response response = client.newCall(request).execute();
Related
I'm writing a program that builds stuff in a GUI (blah blah blah... irrelevant details), and the user is allowed to export that data as a .tex file which can be compiled to a PDF. Since I don't really want to assume they have a TeX environment installed, I'm using an API (latexonline.cc). That way, I can construct an HTTP GET request, send it to the API, and (hopefully!) get the PDF back as a byte stream. The issue, though, is that when I submit the request, I'm only getting the page data back instead of the data from the PDF. I'm not sure if it's because of how I'm doing my request or not...
Here's the code:
... // preceding code
DataOutputStream dos = new DataOutputStream(new FileOutputStream("test.pdf"));
StringBuilder httpTex = new StringBuilder();
httpTex.append(this.getTexCode(...)); // This appends the TeX code (nothing wrong here)
// Build the URL and HTTP request.
String texURL = "https://latexonline.cc/compile?text=";
String paramURL = URLEncoder.encode(httpTex.toString(), "UTF-8");
URL url = new URL(texURL + paramURL);
byte[] buffer = new byte[1024];
try {
InputStream is = url.openStream();
int bufferLen = -1;
while ((bufferLen = is.read(buffer)) > -1) {
this.getOutputStream().write(buffer, 0, bufferLen);
}
dos.close();
is.close();
} catch (IOException ex) {
ex.printStackTrace();
}
Edit: Here's the data I'm getting from the GET request:
https://pastebin.com/qYtGXUsd
Solved! I used a different API and it works perfectly.
https://github.com/YtoTech/latex-on-http
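For anyone sticking with the original latexonline.cc endpoint, here is a minimal sketch that streams the HTTP response straight into the target PDF file. It assumes the endpoint returns raw PDF bytes on a 200 response, which matches the question's description but has not been re-verified here; on failure the service returns an error page instead of a PDF, so check the status code first.
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class LatexDownload {
    public static void main(String[] args) throws IOException {
        String texCode = "\\documentclass{article}\\begin{document}Hello\\end{document}";
        String url = "https://latexonline.cc/compile?text=" + URLEncoder.encode(texCode, "UTF-8");

        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        int status = conn.getResponseCode();
        if (status == 200) {
            // Copy the response body (the compiled PDF) directly to disk
            try (InputStream is = conn.getInputStream()) {
                Files.copy(is, Paths.get("test.pdf"), StandardCopyOption.REPLACE_EXISTING);
            }
        } else {
            // On failure the service returns an error page / compile log instead of a PDF
            System.err.println("Compilation failed, HTTP status " + status);
        }
        conn.disconnect();
    }
}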
We are developing a document microservice that needs to use Azure as storage for file content. Azure Block Blob seemed like a reasonable choice. The document service has its heap limited to 512 MB (-Xmx512m).
I was not able to get a streaming file upload with the limited heap to work using azure-storage-blob:12.10.0-beta.1 (also tested on 12.9.0).
The following approaches were attempted:
Copy-paste from the documentation using BlockBlobClient
BlockBlobClient blockBlobClient = blobContainerClient.getBlobClient("file").getBlockBlobClient();
File file = new File("file");
try (InputStream dataStream = new FileInputStream(file)) {
blockBlobClient.upload(dataStream, file.length(), true /* overwrite file */);
}
Result: java.io.IOException: mark/reset not supported. The SDK tries to use mark/reset even though the file input stream reports this feature as unsupported.
Adding a BufferedInputStream to mitigate the mark/reset issue (per advice):
BlockBlobClient blockBlobClient = blobContainerClient.getBlobClient("file").getBlockBlobClient();
File file = new File("file");
try (InputStream dataStream = new BufferedInputStream(new FileInputStream(file))) {
blockBlobClient.upload(dataStream, file.length(), true /* overwrite file */);
}
Result: java.lang.OutOfMemoryError: Java heap space. I assume the SDK attempted to load all 1.17 GB of file content into memory.
Replacing BlockBlobClient with BlobClient and removing heap size limitation (-Xmx512m):
BlobClient blobClient = blobContainerClient.getBlobClient("file");
File file = new File("file");
try (InputStream dataStream = new FileInputStream(file)) {
blobClient.upload(dataStream, file.length(), true /* overwrite file */);
}
Result: 1.5 GB of heap memory used; all file content is loaded into memory, plus some buffering on the Reactor side.
Heap usage from VisualVM (screenshot omitted)
Switching to streaming via BlobOutputStream:
long blockSize = DataSize.ofMegabytes(4L).toBytes();
BlockBlobClient blockBlobClient = blobContainerClient.getBlobClient("file").getBlockBlobClient();
// create / erase blob
blockBlobClient.commitBlockList(List.of(), true);
BlockBlobOutputStreamOptions options = (new BlockBlobOutputStreamOptions()).setParallelTransferOptions(
(new ParallelTransferOptions()).setBlockSizeLong(blockSize).setMaxConcurrency(1).setMaxSingleUploadSizeLong(blockSize));
try (InputStream is = new FileInputStream("file")) {
try (OutputStream os = blockBlobClient.getBlobOutputStream(options)) {
IOUtils.copy(is, os); // uses 8KB buffer
}
}
Result: the file is corrupted during upload. The Azure web portal shows 1.09 GB instead of the expected 1.17 GB, and manually downloading the file from the portal confirms that the content was corrupted during upload. The memory footprint decreased significantly, but the file corruption is a showstopper.
Problem: I cannot come up with a working upload/download solution with a small memory footprint.
Any help would be greatly appreciated!
Please try the code below to upload/download big files; I have tested it on my side with a .zip file of about 1.1 GB.
For uploading files:
public static void uploadFilesByChunk() {
String connString = "<conn str>";
String containerName = "<container name>";
String blobName = "UploadOne.zip";
String filePath = "D:/temp/" + blobName;
BlobServiceClient client = new BlobServiceClientBuilder().connectionString(connString).buildClient();
BlobClient blobClient = client.getBlobContainerClient(containerName).getBlobClient(blobName);
long blockSize = 2 * 1024 * 1024; //2MB
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions()
.setBlockSizeLong(blockSize).setMaxConcurrency(2)
.setProgressReceiver(new ProgressReceiver() {
@Override
public void reportProgress(long bytesTransferred) {
System.out.println("uploaded:" + bytesTransferred);
}
});
BlobHttpHeaders headers = new BlobHttpHeaders().setContentLanguage("en-US").setContentType("binary");
blobClient.uploadFromFile(filePath, parallelTransferOptions, headers, null, AccessTier.HOT,
new BlobRequestConditions(), Duration.ofMinutes(30));
}
Memory footprint: (screenshot omitted)
For downloading files:
public static void downLoadFilesByChunk() {
String connString = "<conn str>";
String containerName = "<container name>";
String blobName = "UploadOne.zip";
String filePath = "D:/temp/" + "DownloadOne.zip";
BlobServiceClient client = new BlobServiceClientBuilder().connectionString(connString).buildClient();
BlobClient blobClient = client.getBlobContainerClient(containerName).getBlobClient(blobName);
long blockSize = 2 * 1024 * 1024;
com.azure.storage.common.ParallelTransferOptions parallelTransferOptions = new com.azure.storage.common.ParallelTransferOptions()
.setBlockSizeLong(blockSize).setMaxConcurrency(2)
.setProgressReceiver(new com.azure.storage.common.ProgressReceiver() {
@Override
public void reportProgress(long bytesTransferred) {
System.out.println("dowloaded:" + bytesTransferred);
}
});
BlobDownloadToFileOptions options = new BlobDownloadToFileOptions(filePath)
.setParallelTransferOptions(parallelTransferOptions);
blobClient.downloadToFileWithResponse(options, Duration.ofMinutes(30), null);
}
Memory footprint: (screenshot omitted)
Result: (screenshot omitted)
I'm using a Java REST service for a file upload.
The file should land on my server, which it does, then move to Amazon S3 bucket.
The upload to the server is fine, but the 2nd call to another method does not work.
I assume it is because of a timeout issue?
The code to move the file to amazon works in another app, but I am not able to get it working within my REST project.
Here is the method:
@POST
@Path("/upload")
@Consumes(MediaType.MULTIPART_FORM_DATA)
public Response uploadFile(@FormDataParam("file") InputStream inputStream,
@FormDataParam("file") FormDataContentDisposition file, @FormDataParam("filename") String filename){
Logger log = Logger.getLogger("Mike");
String response = "";
File f = null;
try {
final String FILE_DESTINATION = "C://uploads//" + file.getFileName();
f = new File(FILE_DESTINATION);
OutputStream outputStream = new FileOutputStream(f);
int size = 0;
byte[] bytes = new byte[1024];
while ((size = inputStream.read(bytes)) != -1) {
outputStream.write(bytes, 0, size);
}
outputStream.flush();
outputStream.close();
log.info("upload complete for initial file!");
//move file to Amazon S3 Bucket.
AmazonS3 s3 = new AmazonS3Client(
new ClasspathPropertiesFileCredentialsProvider());
log.info("trying put request");
PutObjectRequest request = new PutObjectRequest("site.address.org","/pdf/PDF_Web_Service/work/"+f.getName(),f);
log.info(f.getName());
log.info(f.getAbsolutePath());
s3.putObject(request);
log.info("put request complete");
response = "File uploaded " + FILE_DESTINATION;
} catch (Exception e) {
e.printStackTrace();
}
return Response.status(200).entity(response).build();
}
Specifically, here is the part not working. I am not getting any log info either:
//move file to Amazon S3 Bucket.
AmazonS3 s3 = new AmazonS3Client(
new ClasspathPropertiesFileCredentialsProvider());
log.info("trying put request");
PutObjectRequest request = new PutObjectRequest("site.address.org","/pdf/PDF_Web_Service/work/"+f.getName(),f);
log.info(f.getName()); log.info(f.getAbsolutePath());
s3.putObject(request); log.info("put request complete");
Michael,
If it's a timeout issue, it's common practice to use Guava's ListenableFuture to chain your tasks together. Your web sequence will then look like this:
a) Client sends file
b) Server responds with 200 once file completes uploading.
c) Once the server is done loading the file, chain the future to then upload to S3.
Chaining listenable futures is common practice to separate functionality and avoid a timeout by breaking up your code and essentially pipelining it.
Please let me know if you have any questions!
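A rough sketch of that chaining pattern with Guava; saveToDisk and uploadToS3 are hypothetical stand-ins for the file-writing and S3 code already shown in the question.
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.MoreExecutors;

import java.io.File;
import java.io.InputStream;
import java.util.concurrent.Executors;

public class UploadPipeline {

    private final ListeningExecutorService executor =
            MoreExecutors.listeningDecorator(Executors.newFixedThreadPool(2));

    public void handleUpload(InputStream inputStream, String fileName) {
        // Step 1: write the incoming stream to local disk on a worker thread,
        // so the HTTP request can return 200 as soon as the upload to the server finishes.
        ListenableFuture<File> savedFile =
                executor.submit(() -> saveToDisk(inputStream, fileName));

        // Step 2: once the local save is done, push the file to S3 without blocking the request thread.
        Futures.addCallback(savedFile, new FutureCallback<File>() {
            @Override
            public void onSuccess(File file) {
                uploadToS3(file);
            }

            @Override
            public void onFailure(Throwable t) {
                t.printStackTrace();
            }
        }, executor);
    }

    private File saveToDisk(InputStream in, String fileName) {
        // hypothetical helper: stream the upload to C://uploads//<fileName> and return the File
        throw new UnsupportedOperationException("replace with the file-writing code from the question");
    }

    private void uploadToS3(File file) {
        // hypothetical helper: PutObjectRequest + s3.putObject(...) from the question
        throw new UnsupportedOperationException("replace with the S3 code from the question");
    }
}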
I moved the Amazon code into the try block and now it works.
I am using the following code from an Android application to upload a blob to Azure Blob Storage. Note: the sasUrl parameter below is a signed URL acquired from my web service:
// upload file to azure blob storage
private static Boolean upload(String sasUrl, String filePath, String mimeType) {
try {
// Get the file data
File file = new File(filePath);
if (!file.exists()) {
return false;
}
String absoluteFilePath = file.getAbsolutePath();
FileInputStream fis = new FileInputStream(absoluteFilePath);
int bytesRead = 0;
ByteArrayOutputStream bos = new ByteArrayOutputStream();
byte[] b = new byte[1024];
while ((bytesRead = fis.read(b)) != -1) {
bos.write(b, 0, bytesRead);
}
fis.close();
byte[] bytes = bos.toByteArray();
// Post our image data (byte array) to the server
URL url = new URL(sasUrl.replace("\"", ""));
HttpURLConnection urlConnection = (HttpURLConnection) url.openConnection();
urlConnection.setDoOutput(true);
urlConnection.setConnectTimeout(15000);
urlConnection.setReadTimeout(15000);
urlConnection.setRequestMethod("PUT");
urlConnection.addRequestProperty("Content-Type", mimeType);
urlConnection.setRequestProperty("Content-Length", "" + bytes.length);
urlConnection.setRequestProperty("x-ms-blob-type", "BlockBlob");
// Write file data to server
DataOutputStream wr = new DataOutputStream(urlConnection.getOutputStream());
wr.write(bytes);
wr.flush();
wr.close();
int response = urlConnection.getResponseCode();
if (response == 201 && urlConnection.getResponseMessage().equals("Created")) {
return true;
}
} catch (Exception e) {
e.printStackTrace();
}
return false;
}
The code works fine for small blobs, but when a blob reaches a certain size (depending on the phone I am testing with), I start to get out-of-memory exceptions. I would like to split the blobs and upload them in blocks. However, all the examples I find on the web are C#-based and use the Storage Client library. I am looking for a Java/Android example that uploads a blob in blocks using the Azure Storage REST API.
There is an Azure Storage Android library published here. A basic blob storage example is in the samples folder. The method you’d probably like to use is uploadFromFile in the blob class. This will, by default attempt to put the blob in a single put if the size is less than 64MB and otherwise send the blob in 4MB blocks. If you’d like to reduce the 64MB limit, you can set the singleBlobPutThresholdInBytes property on the BlobRequestOptions object of either the CloudBlobClient (which will affect all requests) or passed to the uploadFromFile method (to affect only that request). The storage library includes many convenient features such as automated retries and maximum execution timeout across the block put requests which are all configurable.
If you’d still like to use a more manual approach, the PutBlock and Put Block List API references are here and provide generic, cross-language documentation. These have nice wrappers in the CloudBlockBlob class of the Azure Storage Android library called uploadBlock and commitBlockList which may save you a lot of time in manual request construction and can provide some of the aforementioned conveniences.
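A rough sketch of the library-based approach described above, assuming a recent version of the Azure Storage Android library; the connection string, container, and blob names are placeholders, and exact method signatures may vary between versions, so check the library's samples folder.
import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.blob.CloudBlobClient;
import com.microsoft.azure.storage.blob.CloudBlobContainer;
import com.microsoft.azure.storage.blob.CloudBlockBlob;

public class BlockBlobUpload {

    public static void uploadLargeBlob(String connectionString, String filePath) throws Exception {
        CloudStorageAccount account = CloudStorageAccount.parse(connectionString);
        CloudBlobClient blobClient = account.createCloudBlobClient();

        // Blobs larger than this threshold are uploaded in 4 MB blocks instead of a single put
        // (lowering it from the 64 MB default forces block uploads for smaller blobs too)
        blobClient.getDefaultRequestOptions().setSingleBlobPutThresholdInBytes(4 * 1024 * 1024);

        CloudBlobContainer container = blobClient.getContainerReference("mycontainer");
        container.createIfNotExists();
        CloudBlockBlob blob = container.getBlockBlobReference("myblob.dat");

        // The library handles the Put Block / Put Block List sequence internally,
        // including retries and execution timeouts across the block requests.
        blob.uploadFromFile(filePath);
    }
}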
I am sending images from my Android client to a Java Jersey RESTful service and I succeeded in doing that. My issue is that when I try to send large images (say > 1 MB) it takes more time, so I would like to send the image in chunks. Can anyone help me with this? How do I send (POST) an image stream in chunks to the server?
References used:
server code & client call
server function name
/*** SERVER SIDE CODE****/
@POST
@Path("/upload/{attachmentName}")
@Consumes(MediaType.APPLICATION_OCTET_STREAM)
public Response uploadAttachment(
        @PathParam("attachmentName") String attachmentName,
        InputStream attachmentInputStream) throws IOException {
    // do something better than this in real code
    OutputStream out = new FileOutputStream("content.txt");
    byte[] buffer = new byte[1024];
    int len;
    while ((len = attachmentInputStream.read(buffer)) != -1) {
        // whatever processing you want here
        out.write(buffer, 0, len);
    }
    out.close();
    return Response.status(201).build();
}
/**********************************************/
/**
CLIENT SIDE CODE
**/
// .....
client.setChunkedEncodingSize(1024);
WebResource rootResource = client.resource("your-server-base-url");
File file = new File("your-file-path");
InputStream fileInStream = new FileInputStream(file);
String contentDisposition = "attachment; filename=\"" + file.getName() + "\"";
ClientResponse response = rootResource.path("attachment").path("upload").path("your-file-name")
.type(MediaType.APPLICATION_OCTET_STREAM).header("Content-Disposition", contentDisposition)
.post(ClientResponse.class, fileInStream);
You should split the file on the client and upload the parts to the server, then merge the parts back together on the server (a minimal split sketch follows below).
Take a look at the split/merge file example on CodeRanch.
Enjoy ! :)
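A minimal sketch of the client-side split with plain java.io (the 1 MB chunk size is arbitrary); the server would save each received part and concatenate them in order once all parts have arrived.
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;

public class FileSplitter {

    /** Splits the source file into numbered chunk files of at most chunkSize bytes each. */
    public static List<File> split(File source, int chunkSize) throws IOException {
        List<File> parts = new ArrayList<>();
        byte[] buffer = new byte[chunkSize];
        int partNumber = 0;
        try (InputStream in = new FileInputStream(source)) {
            int read;
            while ((read = in.read(buffer)) != -1) {
                File part = new File(source.getParent(), source.getName() + ".part" + partNumber++);
                try (OutputStream out = new FileOutputStream(part)) {
                    out.write(buffer, 0, read);
                }
                parts.add(part);
            }
        }
        return parts;
    }

    public static void main(String[] args) throws IOException {
        // Each part can then be POSTed separately with the Jersey client shown above
        List<File> parts = split(new File("your-file-path"), 1024 * 1024);
        System.out.println("Created " + parts.size() + " parts");
    }
}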
Another path is available: if you don't want to code too much, consider using:
the Apache Commons FileUpload library, which is great! :)