InetAddress.getLocalHost().getHostName() different behavior between JDK 11 and JDK 8 - java

I wrote a simple java program to basically run:
System.out.println(InetAddress.getLocalHost().getHostName());
If I compile and run it on Java 1.7.231 or 1.8.221 on RHEL 7.7, it returns the FQDN (computer.domain.com), but on the same server, compiled with RHEL JDK 11.0.2, it returns only the short server name.
As I understand it, it should do a reverse DNS lookup (basically a hostname -f), but with JDK 11 the behavior is definitely different. Any idea why this is happening?

This might be the same problem as reported here: InetAddress.getLocalhost() does not give same result in java7 and java8.
It boils down to a change in the JDK:
Since http://hg.openjdk.java.net/jdk8/jdk8/jdk/rev/81987765cb81 was pushed, we call getaddrinfo / getnameinfo to get the local host name instead of the older (obsolete) gethostbyname_r / gethostbyaddr_r calls.
The newer calls respect the local host's /etc/nsswitch.conf configuration file. On this machine, that file tells these calls to look in files before referencing other naming services.
Since the /etc/hosts file contains an explicit mapping for this hostname / IP combination, that is what is returned.
In the older JDKs, gethostbyname_r actually ignored the local machine's settings and immediately delegated to the naming service.
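If you need the FQDN regardless of which lookup path the JDK takes, one workaround is to ask explicitly for the canonical name. A minimal sketch (what each call prints depends on your /etc/hosts and DNS setup):
import java.net.InetAddress;

public class HostNameDemo {
    public static void main(String[] args) throws Exception {
        InetAddress local = InetAddress.getLocalHost();
        // getHostName() may return the short name from gethostname(2);
        // getCanonicalHostName() asks the resolver for the fully qualified name.
        System.out.println("host name:      " + local.getHostName());
        System.out.println("canonical name: " + local.getCanonicalHostName());
    }
}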

Under the hood, in order to obtain the local host name, the JDK performs a native invocation to the underlying operating system.
The C function involved is getLocalHostName. There are separate implementations for IPv4 and IPv6, but they are essentially the same source code with minimal changes to account for IPv6.
Let's take the IPv4 code as an example.
For Java 11, the corresponding native code is implemented in Inet4AddressImpl.c. This is how getLocalHostName is implemented:
/*
 * Class:     java_net_Inet4AddressImpl
 * Method:    getLocalHostName
 * Signature: ()Ljava/lang/String;
 */
JNIEXPORT jstring JNICALL
Java_java_net_Inet4AddressImpl_getLocalHostName(JNIEnv *env, jobject this) {
    char hostname[NI_MAXHOST + 1];

    hostname[0] = '\0';
    if (gethostname(hostname, sizeof(hostname)) != 0) {
        strcpy(hostname, "localhost");
    } else {
#if defined(__solaris__)
        // try to resolve hostname via nameservice
        // if it is known but getnameinfo fails, hostname will still be the
        // value from gethostname
        struct addrinfo hints, *res;

        // make sure string is null-terminated
        hostname[NI_MAXHOST] = '\0';
        memset(&hints, 0, sizeof(hints));
        hints.ai_flags = AI_CANONNAME;
        hints.ai_family = AF_INET;

        if (getaddrinfo(hostname, NULL, &hints, &res) == 0) {
            getnameinfo(res->ai_addr, res->ai_addrlen, hostname,
                        sizeof(hostname), NULL, 0, NI_NAMEREQD);
            freeaddrinfo(res);
        }
#else
        // make sure string is null-terminated
        hostname[NI_MAXHOST] = '\0';
#endif
    }
    return (*env)->NewStringUTF(env, hostname);
}
As you can see, on anything other than Solaris, the code relies only on gethostname to obtain the required value. This restriction was introduced in this commit in the context of this bug.
Here you can see the analogous IPv4 native source code implementation for Java 8.
In that source code you can find several differences from the Java 11 version.
First, the code is divided into two sections depending on whether the following definition applies:
#if defined(__GLIBC__) || (defined(__FreeBSD__) && (__FreeBSD_version >= 601104))
#define HAS_GLIBC_GETHOSTBY_R 1
#endif
#if defined(_ALLBSD_SOURCE) && !defined(HAS_GLIBC_GETHOSTBY_R)
...
#else /* defined(_ALLBSD_SOURCE) && !defined(HAS_GLIBC_GETHOSTBY_R) */
...
and the implementation provided for getLocalHostName differs depending on whether the condition applies.
In my opinion, in the case of Red Hat the condition does not apply and, as a consequence, the following code is the one used at runtime:
/************************************************************************
 * Inet4AddressImpl
 */

/*
 * Class:     java_net_Inet4AddressImpl
 * Method:    getLocalHostName
 * Signature: ()Ljava/lang/String;
 */
JNIEXPORT jstring JNICALL
Java_java_net_Inet4AddressImpl_getLocalHostName(JNIEnv *env, jobject this) {
    char hostname[NI_MAXHOST+1];

    hostname[0] = '\0';
    if (JVM_GetHostName(hostname, sizeof(hostname))) {
        /* Something went wrong, maybe networking is not setup? */
        strcpy(hostname, "localhost");
    } else {
        struct addrinfo hints, *res;
        int error;

        hostname[NI_MAXHOST] = '\0';
        memset(&hints, 0, sizeof(hints));
        hints.ai_flags = AI_CANONNAME;
        hints.ai_family = AF_INET;

        error = getaddrinfo(hostname, NULL, &hints, &res);

        if (error == 0) { /* host is known to name service */
            getnameinfo(res->ai_addr,
                        res->ai_addrlen,
                        hostname,
                        NI_MAXHOST,
                        NULL,
                        0,
                        NI_NAMEREQD);

            /* if getnameinfo fails hostname is still the value
               from gethostname */

            freeaddrinfo(res);
        }
    }
    return (*env)->NewStringUTF(env, hostname);
}
As you can see, this implementation also calls gethostname first, although indirectly through JVM_GetHostName, a wrapper implemented in C++:
JVM_LEAF(int, JVM_GetHostName(char* name, int namelen))
  JVMWrapper("JVM_GetHostName");
  return os::get_host_name(name, namelen);
JVM_END
Depending on the actual OS, os::get_host_name translates to different functions. For Linux it invokes gethostname:
inline int os::get_host_name(char* name, int namelen) {
    return ::gethostname(name, namelen);
}
If the call to gethostname succeeds, getaddrinfo is invoked with the host name returned by gethostname. If, in turn, that call succeeds, getnameinfo is invoked with the address returned by getaddrinfo to obtain the final host name.
In a certain way it seems strange to me, and I feel I'm missing something, but these differences are very likely the cause of the different behavior you experienced; the hypothesis can be tested by building the native code above and debugging the results obtained on your system.

This answer from the Oracle documentation may help you:
On Red Hat Linux installations InetAddress.getLocalHost() may return an InetAddress corresponding to the loopback address (127.0.0.1). This arises because the default installation creates an association in /etc/hosts between the hostname of the machine and the loopback address. To ensure that InetAddress.getLocalHost() returns the actual host address, update the /etc/hosts file or the name service configuration file (/etc/nsswitch.conf) to query dns or nis before searching hosts.
Link : https://docs.oracle.com/javase/7/docs/technotes/guides/idl/jidlFAQ.html
Similar bug on JDK 1.7
https://bugs.java.com/bugdatabase/view_bug.do?bug_id=7166687
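For example, after following that advice, the hosts line in /etc/nsswitch.conf would look something like this (illustrative only; the exact services available depend on your distribution):
# query DNS before the local /etc/hosts file
hosts: dns files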

There's a possibility that on JDK 11 the local host name comes from a built-in, predefined JDK keyword that is consulted when retrieving the local host name, and that you are overriding this predefined keyword with your own variable when you retrieve it. Accidentally overriding a built-in variable with a user-defined one can cause the original built-in keyword to lose its value, which in turn returns empty or different results.
This might not be the best answer to your question, but I suggest you check the JDK built-in keywords and the RHEL Linux built-in keywords for the Inet call that returns the local host name.

Related

Memory-mapping a file in Windows with SHARE attribute (so file is not locked against deletion)

Is there any way to map a file's content into memory in Windows that does not hold a lock on the file (in particular, such that the file can be deleted while still mmap'd)?
The Java NIO libraries mmap files in Windows in such a way that the mapped file cannot be deleted while there is any non-garbage-collected MappedByteBuffer reference left in the heap. The JDK team claim that this is a limitation of Windows, but only when files are mmap'd, not when they are opened as regular files:
https://mail.openjdk.java.net/pipermail/nio-dev/2019-January/005698.html
(Obviously if a file is deleted while mmap'd, exactly what should happen to the mmap'd region is debatable in the world of Windows file semantics, though it's well-defined in Linux.)
For reference, the inability to delete files while they are memory mapped (or not yet garbage collected) creates a lot of problems in Java:
http://www.mapdb.org/blog/mmap_files_alloc_and_jvm_crash/
And there are security reasons why an unmap operation is not supported:
https://bugs.openjdk.java.net/browse/JDK-4724038
UPDATE: See also: How to unmap an mmap'd file by replacing with a mapping to empty pages
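For reference, a minimal Java sketch of the locking behavior described above (file name and size are arbitrary); on Windows the delete typically fails while the buffer is still reachable, whereas on Linux it succeeds:
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.*;

public class MmapDeleteDemo {
    public static void main(String[] args) throws IOException {
        Path path = Paths.get("demo.bin");
        Files.write(path, new byte[4096]);
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.READ)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, 4096);
            try {
                Files.delete(path); // typically fails on Windows while mapped
                System.out.println("deleted while mapped");
            } catch (IOException e) {
                System.out.println("delete failed: " + e);
            }
            System.out.println(buf.get(0)); // keeps the buffer reachable
        }
    }
}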
As noted by @eryksun, we can delete a mapped file, if the section (file mapping) was created without the SEC_IMAGE attribute, in two ways:
1. Open the file with the FILE_FLAG_DELETE_ON_CLOSE flag: the file is to be deleted immediately after all of its handles are closed, which includes the specified handle and any other open or duplicated handles. Alternatively, we can use an NtOpenFile or NtCreateFile call with the flag FILE_DELETE_ON_CLOSE.
2. Call ZwDeleteFile. Internally, NtDeleteFile opens the file with the FILE_DELETE_ON_CLOSE flag and a special internal disposition DeleteOnly = TRUE. This makes the call more efficient than a normal open followed by closing the handle.
In code this looks like:
#ifndef FILE_SHARE_VALID_FLAGS
#define FILE_SHARE_VALID_FLAGS 0x00000007
#endif

NTSTATUS Delete1(PCWSTR FileName)
{
    HANDLE hFile = CreateFile(FileName, DELETE, FILE_SHARE_VALID_FLAGS, 0,
                              OPEN_EXISTING, FILE_FLAG_DELETE_ON_CLOSE, 0);
    if (hFile == INVALID_HANDLE_VALUE)
    {
        return RtlGetLastNtStatus();
    }
    CloseHandle(hFile);
    return 0;
}

NTSTATUS Delete2(PCWSTR FileName)
{
    UNICODE_STRING ObjectName;
    if (RtlDosPathNameToNtPathName_U(FileName, &ObjectName, 0, 0))
    {
        OBJECT_ATTRIBUTES oa = { sizeof(oa), 0, &ObjectName };
        NTSTATUS status = ZwDeleteFile(&oa);
        RtlFreeUnicodeString(&ObjectName);
        return status;
    }
    return STATUS_UNSUCCESSFUL;
}
Note that calling DeleteFileW here fails with status STATUS_CANNOT_DELETE. I recommend calling RtlGetLastNtStatus() here instead of GetLastError(), because the Win32 mapping of NTSTATUS values to error codes is not injective and frequently loses valuable information. For example, STATUS_CANNOT_DELETE is mapped to ERROR_ACCESS_DENIED, but many other NTSTATUS codes are also mapped to ERROR_ACCESS_DENIED; ERROR_ACCESS_DENIED does not only mean STATUS_ACCESS_DENIED (a real access-denied). Getting STATUS_CANNOT_DELETE is much more informative than ERROR_ACCESS_DENIED. RtlGetLastNtStatus has exactly the same signature as GetLastError and is exported from ntdll.dll (so link against ntdll.lib or ntdllp.lib):
extern "C" NTSYSCALLAPI NTSTATUS NTAPI RtlGetLastNtStatus();

Jpcap crashes when trying to open device

I use Jpcap in order to create ARP requests, but when calling the method JpcapCaptor.openDevice(interface, snaplen, promisc, to_ms), I get the following error:
java.lang.NoSuchMethodError: setPacketValue
  at jpcap.JpcapCaptor.nativeOpenLive(Native Method)
  at jpcap.JpcapCaptor.openDevice(JpcapCaptor.java:61)
The jpcap DLL file I'm using was compiled from source in order to work on 64-bit Windows. Is there any way I can solve this weird problem?
After having looked at the source code behind jpcap.dll, I see that the following code is used in the openDevice method (the one that crashes) in the JpcapCaptor.java file:
JpcapCaptor jpcap = new JpcapCaptor();
String ret = jpcap.nativeOpenLive(intrface.name, snaplen, (promisc ? 1 : 0), to_ms);
According to the stack trace, it is the second line of this snippet that crashes. So I looked at the nativeOpenLive method, which comes from the file JpcapCaptor.c and starts with:
JNIEXPORT jstring JNICALL
Java_jpcap_JpcapCaptor_nativeOpenLive(JNIEnv *env, jobject obj, jstring device,
                                      jint snaplen, jint promisc, jint to_ms) {
    char *dev;
    jint id;
    set_Java_env(env);
However, in this last function (set_Java_env), I found the call to the setPacketValue method in the following form:
setPacketValueMID = (*env)->GetMethodID(env, Packet, "setPacketValue", "(JJII)V");
Having only a very weak grounding in C, I would like to know the meaning of these different calls and, if possible, where the error comes from.
The source is available at the following address: https://github.com/jovigb/jpcap-x64/blob/master/src
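For reference, the JNI signature "(JJII)V" denotes a method taking two longs and two ints and returning void; GetMethodID returns NULL and posts exactly this NoSuchMethodError when the Packet class loaded at runtime does not declare such a method. A hypothetical Java declaration that would match (the parameter names are guesses, not jpcap's actual code):
// Hypothetical shape of the method the native code looks up; jpcap's real
// Packet class must declare an equivalent member for GetMethodID to succeed.
public class Packet {
    void setPacketValue(long sec, long usec, int caplen, int len) {
        // store capture timestamp and lengths (illustrative body)
    }
}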

Is there a way to determine whether the code is deployed on a physical box or a virtual box in Java [duplicate]

I need to detect whether my application is running within a virtualized OS instance or not.
I've found an article with some useful information on the topic; the same article appears in multiple places, and I'm unsure of the original source. VMware implements a particular invalid x86 instruction to return information about itself, while VirtualPC uses a magic number and an I/O port with an IN instruction.
This is workable, but appears to be undocumented behavior in both cases. I suppose a future release of VMWare or VirtualPC might change the mechanism. Is there a better way? Is there a supported mechanism for either product?
Similarly, is there a way to detect Xen or VirtualBox?
I'm not concerned about cases where the platform is deliberately trying to hide itself. For example, honeypots use virtualization but sometimes obscure the mechanisms that malware would use to detect it. I don't care that my app would think it is not virtualized in these honeypots, I'm just looking for a "best effort" solution.
The application is mostly Java, though I'm expecting to use native code plus JNI for this particular function. Windows XP/Vista support is most important, though the mechanisms described in the referenced article are generic features of x86 and don't rely on any particular OS facility.
Have you heard about the blue pill / red pill technique? It's used to see whether you are running inside a virtual machine or not. The term stems from the movie The Matrix, where Neo is offered a blue or a red pill (to stay inside the matrix = blue, or to enter the 'real' world = red).
The following is some code that will detect whether you are running inside 'the matrix' or not (code borrowed from this site, which also contains some nice information about the topic at hand):
int swallow_redpill () {
    unsigned char m[2+4], rpill[] = "\x0f\x01\x0d\x00\x00\x00\x00\xc3";
    *((unsigned*)&rpill[3]) = (unsigned)m;
    ((void(*)())&rpill)();
    return (m[5] > 0xd0) ? 1 : 0;
}
The function will return 1 when you are running inside a virtual machine, and 0 otherwise.
Under Linux I used the command dmidecode (I have it both on CentOS and Ubuntu).
From the man page:
dmidecode is a tool for dumping a computer's DMI (some say SMBIOS) table contents in a human-readable format.
So I searched the output and found out it's probably Microsoft Hyper-V:
Handle 0x0001, DMI type 1, 25 bytes
System Information
Manufacturer: Microsoft Corporation
Product Name: Virtual Machine
Version: 5.0
Serial Number: some-strings
UUID: some-strings
Wake-up Type: Power Switch
Handle 0x0002, DMI type 2, 8 bytes
Base Board Information
Manufacturer: Microsoft Corporation
Product Name: Virtual Machine
Version: 5.0
Serial Number: some-strings
Another way is to look up which manufacturer the MAC address of eth0 belongs to: http://www.coffer.com/mac_find/
If it returns Microsoft, VMware, etc., then it's probably a virtual server.
VMware has a "Mechanisms to determine if software is running in a VMware virtual machine" knowledge base article which has some source code.
Microsoft also has a page on "Determining If Hypervisor Is Installed". MS spells out this requirement of a hypervisor in the "IsVM TEST" section of their "Server Virtualization Validation Test" document.
The VMware and MS docs both mention using the CPUID instruction to check the hypervisor-present bit (bit 31 of register ECX).
The RHEL bug tracker has an entry, "should set ISVM bit (ECX:31) for CPUID leaf 0x00000001", about setting bit 31 of register ECX under the Xen kernel.
So without getting into vendor specifics, it looks like you could use the CPUID check to know whether you're running virtualized or not.
No. This is impossible to detect with complete accuracy. Some virtualization systems, like QEMU, emulate an entire machine down to the hardware registers. Let's turn this around: what is it you're trying to do? Maybe we can help with that.
I think that going forward, relying on tricks like the broken SIDT virtualization is not really going to help, as the hardware plugs all the holes that the weird and messy x86 architecture has left. The best option would be to lobby the VM providers for a standard way to tell that you are on a VM, at least for the case where the user has explicitly allowed it. But if we assume that we are explicitly allowing the VM to be detected, we can just as well place visible markers in there, right? I would suggest simply updating the disk on your VMs with a file telling you that you are on a VM: a small text file in the root of the file system, for example. Or inspect the MAC of eth0 and set it to a given known string.
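A minimal Java sketch of that marker-file convention (the path is an arbitrary convention you would establish for your own images, not anything VMs provide):
import java.nio.file.Files;
import java.nio.file.Paths;

public class VmMarkerCheck {
    // Looks for a marker file that our own VM images would ship with;
    // the path is a made-up convention, not a standard.
    public static boolean looksLikeOurVm() {
        return Files.exists(Paths.get("/etc/this-is-a-vm"));
    }

    public static void main(String[] args) {
        System.out.println(looksLikeOurVm() ? "marker present" : "no marker");
    }
}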
On VirtualBox, assuming you have control over the VM guest and you have dmidecode, you can use this command:
dmidecode -s bios-version
and it will return
VirtualBox
On Linux, systemd provides a command for detecting whether the system is running as a virtual machine or not.
Command:
$ systemd-detect-virt
If the system is virtualized, it outputs the name of the virtualization software/technology.
If not, it outputs none.
For instance if the system is running KVM then:
$ systemd-detect-virt
kvm
You don't need to run it as sudo.
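If you want to consume this from Java, a minimal sketch that shells out to systemd-detect-virt (assuming the tool is on the PATH; it prints none on bare metal):
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class DetectVirt {
    // Runs systemd-detect-virt and returns its first output line
    // (e.g. "kvm" or "none"), or null if the tool is unavailable.
    static String detect() {
        try {
            Process p = new ProcessBuilder("systemd-detect-virt").start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                return r.readLine();
            }
        } catch (Exception e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(detect());
    }
}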
I'd like to recommend a paper posted at Usenix HotOS '07, "Compatibility Is Not Transparency: VMM Detection Myths and Realities", which surveys several techniques for telling whether an application is running in a virtualized environment.
For example, use the sidt instruction as redpill does (but that instruction can also be made transparent by dynamic translation), or compare the runtime of cpuid against other non-virtualized instructions.
While installing the newest Ubuntu I discovered the package called imvirt. Have a look at it at http://micky.ibh.net/~liske/imvirt.html
This C function will detect whether the guest OS is running in a VM:
(Tested on Windows, compiled with Visual Studio)
#include <intrin.h>

bool isGuestOSVM()
{
    unsigned int cpuInfo[4];
    __cpuid((int*)cpuInfo, 1);
    return ((cpuInfo[2] >> 31) & 1) == 1;
}
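Since the question anticipates using native code plus JNI, a hypothetical Java-side binding for the function above might look like this (the library name vmdetect and the class layout are assumptions; you would still have to write the JNI glue that forwards to isGuestOSVM):
public class VmDetect {
    static {
        System.loadLibrary("vmdetect"); // hypothetical native library name
    }

    // Native method backed by the CPUID check shown above via JNI glue.
    public static native boolean isGuestOSVM();

    public static void main(String[] args) {
        System.out.println(isGuestOSVM() ? "virtualized" : "bare metal");
    }
}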
Under Linux, you can inspect /proc/cpuinfo. If you're in VMware, it usually comes up differently than on bare metal, but not always. Virtuozzo shows a pass-through to the underlying hardware.
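A small sketch of that idea: on Linux the kernel adds a hypervisor entry to the flags line of /proc/cpuinfo when the CPUID hypervisor-present bit is set, so scanning for it is a cheap (if not foolproof) check:
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class CpuInfoCheck {
    public static void main(String[] args) throws Exception {
        try (Stream<String> lines = Files.lines(Paths.get("/proc/cpuinfo"))) {
            // Look for the "hypervisor" CPU flag on any "flags" line.
            boolean virt = lines.anyMatch(l ->
                    l.startsWith("flags") && l.contains(" hypervisor"));
            System.out.println(virt ? "likely virtualized" : "no hypervisor flag");
        }
    }
}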
Try reading the SMBIOS structures, especially the structs with the BIOS information.
In Linux you can use the dmidecode utility to browse this information.
Check out the tool virt-what. It uses the previously mentioned dmidecode to determine whether you are on a virtualized host, and of which type.
I tried a different approach, suggested by my friend: virtual machines running on VMware don't have a CPU temperature property, i.e. they don't show the temperature of the CPU. I am using the CPU Thermometer application for checking the CPU temperature.
(screenshot: Windows running in VMware)
(screenshot: Windows running on a real CPU)
So I wrote a small program to detect the temperature sensor:
#include "stdafx.h"
#define _WIN32_DCOM
#include <iostream>
using namespace std;
#include <comdef.h>
#include <Wbemidl.h>
#pragma comment(lib, "wbemuuid.lib")
int main(int argc, char **argv)
{
HRESULT hres;
// Step 1: --------------------------------------------------
// Initialize COM. ------------------------------------------
hres = CoInitializeEx(0, COINIT_MULTITHREADED);
if (FAILED(hres))
{
cout << "Failed to initialize COM library. Error code = 0x"
<< hex << hres << endl;
return 1; // Program has failed.
}
// Step 2: --------------------------------------------------
// Set general COM security levels --------------------------
hres = CoInitializeSecurity(
NULL,
-1, // COM authentication
NULL, // Authentication services
NULL, // Reserved
RPC_C_AUTHN_LEVEL_DEFAULT, // Default authentication
RPC_C_IMP_LEVEL_IMPERSONATE, // Default Impersonation
NULL, // Authentication info
EOAC_NONE, // Additional capabilities
NULL // Reserved
);
if (FAILED(hres))
{
cout << "Failed to initialize security. Error code = 0x"
<< hex << hres << endl;
CoUninitialize();
return 1; // Program has failed.
}
// Step 3: ---------------------------------------------------
// Obtain the initial locator to WMI -------------------------
IWbemLocator *pLoc = NULL;
hres = CoCreateInstance(
CLSID_WbemLocator,
0,
CLSCTX_INPROC_SERVER,
IID_IWbemLocator, (LPVOID *)&pLoc);
if (FAILED(hres))
{
cout << "Failed to create IWbemLocator object."
<< " Err code = 0x"
<< hex << hres << endl;
CoUninitialize();
return 1; // Program has failed.
}
// Step 4: -----------------------------------------------------
// Connect to WMI through the IWbemLocator::ConnectServer method
IWbemServices *pSvc = NULL;
// Connect to the root\cimv2 namespace with
// the current user and obtain pointer pSvc
// to make IWbemServices calls.
hres = pLoc->ConnectServer(
_bstr_t(L"ROOT\\CIMV2"), // Object path of WMI namespace
NULL, // User name. NULL = current user
NULL, // User password. NULL = current
0, // Locale. NULL indicates current
NULL, // Security flags.
0, // Authority (for example, Kerberos)
0, // Context object
&pSvc // pointer to IWbemServices proxy
);
if (FAILED(hres))
{
cout << "Could not connect. Error code = 0x"
<< hex << hres << endl;
pLoc->Release();
CoUninitialize();
return 1; // Program has failed.
}
cout << "Connected to ROOT\\CIMV2 WMI namespace" << endl;
// Step 5: --------------------------------------------------
// Set security levels on the proxy -------------------------
hres = CoSetProxyBlanket(
pSvc, // Indicates the proxy to set
RPC_C_AUTHN_WINNT, // RPC_C_AUTHN_xxx
RPC_C_AUTHZ_NONE, // RPC_C_AUTHZ_xxx
NULL, // Server principal name
RPC_C_AUTHN_LEVEL_CALL, // RPC_C_AUTHN_LEVEL_xxx
RPC_C_IMP_LEVEL_IMPERSONATE, // RPC_C_IMP_LEVEL_xxx
NULL, // client identity
EOAC_NONE // proxy capabilities
);
if (FAILED(hres))
{
cout << "Could not set proxy blanket. Error code = 0x"
<< hex << hres << endl;
pSvc->Release();
pLoc->Release();
CoUninitialize();
return 1; // Program has failed.
}
// Step 6: --------------------------------------------------
// Use the IWbemServices pointer to make requests of WMI ----
// For example, get the name of the operating system
IEnumWbemClassObject* pEnumerator = NULL;
hres = pSvc->ExecQuery(
bstr_t("WQL"),
bstr_t(L"SELECT * FROM Win32_TemperatureProbe"),
WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY,
NULL,
&pEnumerator);
if (FAILED(hres))
{
cout << "Query for operating system name failed."
<< " Error code = 0x"
<< hex << hres << endl;
pSvc->Release();
pLoc->Release();
CoUninitialize();
return 1; // Program has failed.
}
// Step 7: -------------------------------------------------
// Get the data from the query in step 6 -------------------
IWbemClassObject *pclsObj = NULL;
ULONG uReturn = 0;
while (pEnumerator)
{
HRESULT hr = pEnumerator->Next(WBEM_INFINITE, 1,
&pclsObj, &uReturn);
if (0 == uReturn)
{
break;
}
VARIANT vtProp;
// Get the value of the Name property
hr = pclsObj->Get(L"SystemName", 0, &vtProp, 0, 0);
wcout << " OS Name : " << vtProp.bstrVal << endl;
VariantClear(&vtProp);
VARIANT vtProp1;
VariantInit(&vtProp1);
pclsObj->Get(L"Caption", 0, &vtProp1, 0, 0);
wcout << "Caption: " << vtProp1.bstrVal << endl;
VariantClear(&vtProp1);
pclsObj->Release();
}
// Cleanup
// ========
pSvc->Release();
pLoc->Release();
pEnumerator->Release();
CoUninitialize();
return 0; // Program successfully completed.
}
(screenshot: output on a VMware machine)
(screenshot: output on a real CPU)
I use this C# class to detect whether the guest OS is running inside a virtual environment (Windows only):
sysInfo.cs
using System;
using System.Management;
using System.Text.RegularExpressions;

namespace ConsoleApplication1
{
    public class sysInfo
    {
        public static Boolean isVM()
        {
            bool foundMatch = false;

            ManagementObjectSearcher search1 = new ManagementObjectSearcher("select * from Win32_BIOS");
            var enu = search1.Get().GetEnumerator();
            if (!enu.MoveNext()) throw new Exception("Unexpected WMI query failure");
            string biosVersion = enu.Current["version"].ToString();
            string biosSerialNumber = enu.Current["SerialNumber"].ToString();

            try
            {
                foundMatch = Regex.IsMatch(biosVersion + " " + biosSerialNumber, "VMware|VIRTUAL|A M I|Xen", RegexOptions.IgnoreCase);
            }
            catch (ArgumentException ex)
            {
                // Syntax error in the regular expression
            }

            ManagementObjectSearcher search2 = new ManagementObjectSearcher("select * from Win32_ComputerSystem");
            var enu2 = search2.Get().GetEnumerator();
            if (!enu2.MoveNext()) throw new Exception("Unexpected WMI query failure");
            string manufacturer = enu2.Current["manufacturer"].ToString();
            string model = enu2.Current["model"].ToString();

            try
            {
                // OR with the previous result so a BIOS match is not overwritten
                foundMatch = foundMatch || Regex.IsMatch(manufacturer + " " + model, "Microsoft|VMWare|Virtual", RegexOptions.IgnoreCase);
            }
            catch (ArgumentException ex)
            {
                // Syntax error in the regular expression
            }

            return foundMatch;
        }
    }
}
Usage:
if (sysInfo.isVM()) {
    Console.WriteLine("VM FOUND");
}
I came up with a universal way to detect every type of Windows virtual machine with just one line of code. It supports Windows 7 through 10 (XP not tested yet).
Why do we need a universal way?
The most commonly used approach is to search for and match vendor values from WMI. But what if there are 1000+ VM manufacturers? Then you would have to write code to match 1000+ VM signatures, which is a waste of time; and whenever new VMs are launched later, your script would become obsolete.
Background
I worked on this for many months. I ran many tests, from which I observed that win32_portconnector is always null and empty on VMs. Please see the full report:
//asked at: https://stackoverflow.com/q/64846900/14919621
What is win32_portconnector used for? This question has 3 parts.
1) What is the use case of win32_portconnector? //https://learn.microsoft.com/en-us/windows/win32/cimwin32prov/win32-portconnector
2) Can I get the state of ports using it, like mouse cables, chargers, HDMI cables, etc.?
3) Why do VMs return null results for this query: Get-WmiObject Win32_PortConnector?
On VM:
PS C:\Users\Administrator> Get-WmiObject Win32_PortConnector
On Real environment:
PS C:\Users\Administrator> Get-WmiObject Win32_PortConnector
Tag : Port Connector 0
ConnectorType : {23, 3}
SerialNumber :
ExternalReferenceDesignator :
PortType : 2
Tag : Port Connector 1
ConnectorType : {21, 2}
SerialNumber :
ExternalReferenceDesignator :
PortType : 9
Tag : Port Connector 2
ConnectorType : {64}
SerialNumber :
ExternalReferenceDesignator :
PortType : 16
Tag : Port Connector 3
ConnectorType : {22, 3}
SerialNumber :
ExternalReferenceDesignator :
PortType : 28
Tag : Port Connector 4
ConnectorType : {54}
SerialNumber :
ExternalReferenceDesignator :
PortType : 17
Tag : Port Connector 5
ConnectorType : {38}
SerialNumber :
ExternalReferenceDesignator :
PortType : 30
Tag : Port Connector 6
ConnectorType : {39}
SerialNumber :
ExternalReferenceDesignator :
PortType : 31
Show me the code
Based on these tests, I have made a tiny program which can detect Windows VMs.
//#graysuit
//https://graysuit.github.io
//https://github.com/Back-X/anti-vm
using System;
using System.Windows.Forms;

public class Universal_VM_Detector
{
    static void Main()
    {
        if ((new System.Management.ManagementObjectSearcher("SELECT * FROM Win32_PortConnector")).Get().Count == 0)
        {
            MessageBox.Show("VM detected !");
        }
        else
        {
            MessageBox.Show("VM NOT detected !");
        }
    }
}
You can read the code or get the compiled executable.
Stability
It has been tested in many environments and is very stable:
Detects VirtualBox
Detects VMware
Detects Windows Server
Detects RDP
Detects VirusTotal
Detects any.run
etc...

Starting JVM for Inline

I have a Perl script that uses Inline::Java and just has to fork (it is a server and I want it to handle multiple connections simultaneously).
So I wanted to implement this solution, which makes use of a shared JVM with SHARED_JVM => 1. Since the JVM is not shut down when the script exits, I want to reuse it with START_JVM => 0. But since it might just be the first time I start the server, I would also like a BEGIN block to make sure a JVM is running before calling use Inline.
My question is very simple, but I couldn't find any answer on the web: How do I simply start a JVM? I've looked at man java and there seems to be just no option that means "start and just listen for connections".
Here is a simplified version of what I'm trying to do in Perl, if this helps:
BEGIN {
    &start_jvm unless &jvm_is_running;
}

use Inline (
    Java       => 'STUDY',
    SHARED_JVM => 1,
    START_JVM  => 0,
    STUDY      => ['JavaStuff'],
);

if (fork) {
    JavaStuff->do_something;
    wait;
}
else {
    Inline::Java::reconnect_JVM();
    JavaStuff->do_something;
}
What I need help with is writing the start_jvm subroutine.
If you've got a working jvm_is_running function, just use it to determine whether Inline::Java should start the JVM.
use Inline (
    Java       => 'STUDY',
    SHARED_JVM => 1,
    START_JVM  => jvm_is_running() ? 0 : 1,
    STUDY      => ['JavaStuff'],
);
Thanks to details provided by tobyink, I am able to answer my own question, which was based on the erroneous assumption that the JVM itself provides a server and a protocol.
As a matter of fact, one major component of Inline::Java is a server, written in Java, which handles requests from the Inline::Java::JVM client, written in Perl.
Therefore, the command-line to launch the server is:
$ java org.perl.inline.java.InlineJavaServer <DEBUG> <HOST> <PORT> <SHARED_JVM> <PRIVATE> <NATIVE_DOUBLES>
where all parameters correspond to configuration options described in the Inline::Java documentation.
Therefore, in my case, the start_jvm subroutine would be:
sub start_jvm {
    system 'java org.perl.inline.java.InlineJavaServer 0 localhost 7891 true false false';
}
(Not that it should need to be defined: tobyink's solution, while it did not directly address the question I asked, is much better.)
As for the jvm_is_running subroutine, this is how I defined it:
use Proc::ProcessTable;
use constant {
    JAVA          => 'java',
    INLINE_SERVER => 'org.perl.inline.java.InlineJavaServer',
};

sub jvm_is_running {
    my $pt = new Proc::ProcessTable;
    return grep {
        $_->fname eq JAVA && ( split /\s/, $_->cmndline )[1] eq INLINE_SERVER
    } @{ $pt->table };
}

How to fix JNI crash on env->NewObject?

I'm trying to use Clang via JNI (the Clang C API).
At some point, after a few iterations, it just stops creating new objects and crashes:
map method
0 args:
create Method instance 0x7fa26ba23c90 0x7fa26ba2a0c0 libclang: crash
detected during indexing source file: { 'source_filename' :
'/Users/asmirnov/Documents/dev/src/clang_jni/mac/test/TestFile.h'
'command_line_args' : ['-c', '-x', 'c++'], 'unsaved_files' : [],
'options' : 0, }
The code is pretty straightforward:
mapMethod(JNIEnv *env, const CXIdxDeclInfo *info) {
    debug("map method");
    int numArgs = clang_Cursor_getNumArguments(info->cursor);
    debug(" %i args:", numArgs);
    debug("create Method instance %p %p", MethodClass, MethodConstructor);
    jobject result = env->NewObject(MethodClass, MethodConstructor);
    debug("create Method params instance");
The Method class and constructor are found and registered as globals correctly (or so it seems), and it works for a few iterations:
// method
MethodClass = (jclass)env->NewGlobalRef(env->FindClass("name/antonsmirnov/clang/dto/index/Method"));
debug(MethodClass != NULL ? "found MethodClass" : "not found MethodClass");
MethodConstructor = env->GetMethodID(MethodClass, "<init>", "()V");
debug(MethodConstructor != NULL ? "found MethodConstructor" : "not found MethodConstructor");
I've read some "jni tips and tricks" articles and tried to env->DeleteLocalRef and make local references count too big just to try, but no result:
// magic
jint ensureResult = env->EnsureLocalCapacity(1024);
debug("ensure result %i", ensureResult);
jint pushResult = env->PushLocalFrame(1024);
debug("push result %i", pushResult);
Clang is hijacking the exception, so I can't see the real reason.
The problem happens after a few iterations, as I said, so it seems to be some exceeded-limit problem or something.
What is wrong?
UPDATE: I've done some research and found that if I delete some local vars beforehand, I can get one more iteration and one more object instance. So it makes me think that it really is limited to 16 local references and is ignoring my EnsureLocalCapacity invocation. Where should it be called?
Fixed by using EnsureLocalCapacity in JNI_OnLoad() (it did not work when called in each native method).
Objects created via NewObject or FindClass need to be freed via DeleteLocalRef(), since the number of local references is limited in JNI. Alternatively, you can use EnsureLocalCapacity in JNI_OnLoad().
