This has got me stumped.
I've written an Ant task which sets a logging level via an attribute (logLevel="INFO"). The setter is implemented like this:
public void setLogLevel(String logLevel) {
    System.out.println("Log level passed to ant task: " + logLevel);
    this.level = Level.toLevel(logLevel);
    System.out.println("Log level set to " + level.toString());
}
When I tested the task, this setter never executed, even though the attribute was correctly spelled and set. After a lot of hair pulling I decided to try something that shouldn't matter; I moved the logLevel attribute ahead of my other attributes (it was next to last). Guess what - that change caused the setter to execute.
I changed the attribute back and forth several times to make sure this made a difference, and it does. If the attribute is one of the first ones encountered, the setter executes and the attribute is set. If it's one of the last encountered, the setter does not execute.
I've seen this behavior in both Ant 1.7.1 and 1.9.0. Can anyone tell me why this strange behavior is happening and what I might be doing wrong? My task has 15 attributes, and the logLevel attribute is not set when it is the 11th attribute or later in the list.
Per Martin Clayton's request, here is the XML fragment from the build.xml file. The logLevel attribute is set here, but if I move it down a few lines it will not be set.
<testReport report="${report}/report.xml"
logLevel="${logLevel}"
highestSeverityCountProperty="highestCount"
highSeverityCountProperty="highCount"
mediumSeverityCountProperty="mediumCount"
lowSeverityCountProperty="lowCount"
lowestSeverityCountProperty="lowestCount"
totalViolationsCountProperty="totalCount"
failOnHighestSeverityCount="${lvl1ViolationsFailValue}"
failOnHighSeverityCount="${lvl2ViolationsFailValue}"
failOnMeidumSeverityCount="${lvl3ViolationsFailValue}"
failOnLowSeverityCount="${lvl4ViolationsFailValue}"
failOnLowestSeverityCount="${lvl5ViolationsFailValue}"
failOnTotalViolationsCount="${totalViolationsFailValue}"
failureReason="failMessage"/>
The problem I'm having is that not all attributes get set. This problem does not happen in an Ant build from the command line or Eclipse. It happens during Ant builds using the IBM Rational Team Concert Jazz Build Engine.
I don't know what the problem is, but I have found a workaround using dynamic Ant task attributes instead of setters. See here for a simple description. This worked for me.
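For reference, a minimal sketch of that kind of workaround, assuming Ant's DynamicAttribute interface and a log4j-style Level; the task and attribute names here are illustrative, not the original code:

import org.apache.log4j.Level;
import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.DynamicAttribute;
import org.apache.tools.ant.Task;

public class TestReportTask extends Task implements DynamicAttribute {

    private Level level = Level.INFO;

    // Ant calls this for any attribute that has no matching setter (the name arrives
    // in lower case), so the attribute is handled regardless of its position in the XML.
    public void setDynamicAttribute(String name, String value) throws BuildException {
        if ("loglevel".equalsIgnoreCase(name)) {
            level = Level.toLevel(value);
            log("Log level set to " + level);
        }
        // ... handle the remaining attributes here ...
    }

    public void execute() throws BuildException {
        log("Running with log level " + level);
    }
}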
picoCLI's @-file mechanism is almost what I need, but not exactly. The reason is that I want to control the exact location of the additional files parsed, depending on previous option values.
Example: When called with the options
srcfolder=/a/b optionfile=of.txt, my program should see the additional options read from /a/b/of.txt, but when called with srcfolder=../c optionfile=of.txt, it should see those from ../c/of.txt.
The @-file mechanism can't do that, because it expands ALL the option files (always relative to the current folder, if they're relative) prior to processing ANY option values.
So I'd like to have picoCLI...
process options "from left to right",
recursively parse an option file when it's mentioned in an optionfile option,
and after that continue with the following options.
I might be able to solve this by recursively starting to parse from within the annotated setter method:
import picocli.CommandLine;
import picocli.CommandLine.Option;

...
Config cfg = new Config();
CommandLine cmd = new CommandLine(cfg);
cmd.parseArgs(args);
...

public class Config {

    private String srcfolder;

    @Option(names = "srcfolder")
    public void setSrcfolder(String path) {
        this.srcfolder = path;
    }

    @Option(names = "optionfile")
    public void parseOptionFile(String pathAndName) {
        // validate path, do some other housekeeping...
        // then recursively parse the option file (note the path separator)
        // into this very same Config instance
        CommandLine cmd = new CommandLine(this /* same Config instance! */);
        cmd.parseArgs(new String[] { "@" + this.srcfolder + "/" + pathAndName });
    }
    ...
}
This way several CommandLine instances would call setter methods on the same Config instance, recursively "interrupting" each other. Now comes the actual question: Is that a problem?
Of course my Config class has state. But do CommandLine instances also have state that might get messed up if other CommandLine instances also modify cfg "in between options"?
Thanks for any insights!
Edited to add: I tried, and I'm getting an UnmatchedArgumentException on the @-file option:
Exception in thread "main" picocli.CommandLine$UnmatchedArgumentException: Unmatched argument at index 0: '@/path/to/configfile'
at picocli.CommandLine$Interpreter.validateConstraints(CommandLine.java:13490)
...
So first I have to get around this: obviously picoCLI doesn't expand the @-file option unless it's coming directly from the command line.
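One way around that quirk (my own sketch, not from the original post) would be to expand the file manually inside the setter instead of relying on @-file expansion, assuming one option token per line in the file:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

@Option(names = "optionfile")
public void parseOptionFile(String pathAndName) {
    try {
        // read the option file ourselves, resolved against srcfolder, and feed its
        // lines to a fresh CommandLine parsing into this same Config instance
        List<String> extraArgs = Files.readAllLines(Paths.get(srcfolder, pathAndName));
        new CommandLine(this).parseArgs(extraArgs.toArray(new String[0]));
    } catch (IOException e) {
        throw new CommandLine.ParameterException(new CommandLine(this),
                "Cannot read option file " + pathAndName + ": " + e.getMessage());
    }
}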
I did get it to work: several CommandLine instances can indeed work on the same instance of an annotated class, without interfering with each other.
There are some catches and I had to work around a strange picoCLI quirk, but that's not exactly part of an answer to this question, so I explain them in this other question.
We're returning a @Result from a Struts 2 action in a Spring Boot app that specifies a location containing a relative path, in order to reference a JSP in a sibling directory of the application root.
#Result(name = "foo", location = "../../cat/bar.jsp")
This works in Tomcat 7.0.78, arriving in the StrictHttpFirewall.getFirewalledRequest as:
ApplicationHttpRequest.requestURI = "rootParent/cat/bar.jsp"
In Tomcat 7.0.79+, this flattening no longer happens, and when the request reaches StrictHttpFirewall to check for URL normalization, it blows up because it arrives as:
ApplicationHttpRequest.requestURI = "rootParent/root/WEB-INF/../../cat/bar.jsp"
I've scoured the Apache Tomcat 7 changelog and security fixes to see if there is anything that might cause this, but can find nothing. I've tried tweaking the useRelativeRedirects Context property, but it doesn't seem to have any effect. I'm pulling my hair out stepping through filter chains in debug. Any help would be much appreciated!
Relative paths like these often get flagged by security vulnerability tools, so they generally aren't a great idea, and you should probably stop using them.
That said, if you are as stubborn as a mule, you can still use them, just not within the annotation itself; you could use static variables to achieve the same thing, provided those files exist as resources somewhere:
FooBar.class.getClassLoader().getResource("../../cat/bar.jsp")
I figured out the problem and found a solution.
The problem is that this commit changes the behavior of org.apache.catalina.core.ApplicationContext.getRequestDispatcher between 7.0.78 and 7.0.79. Previously, the provided URL was normalized before being appended to the context path. In 7.0.79 and beyond, the normalized version is no longer being used.
There are no settings that change this behavior, but I figured out how to resolve the issue by modifying the location Struts passes in. By the time the request reaches ApplicationContext, Struts has already combined the context and URL into a single location, but we can override the context portion in the @Result annotation. It uses "WEB-INF/content/" + "%{url}" for everything else, so I changed it to just "%{url}".
Before (results in "WEB-INF/content/../../bar.jsp"):
@Result(name = "foo", location = "../../bar.jsp")
After (results in "bar.jsp"):
@Result(name = "foo", location = "bar.jsp", params = {"location", "%{url}"})
First, some background on why I want this crazy thing. I'm building a plugin for Jenkins that provides an API for scripts started from a pipeline script to communicate with Jenkins independently.
For example, a shell script can then tell Jenkins to start a new stage from the running script.
I've got the communication between the script and Jenkins working, but now I want to start a stage from a callback in my code, and I can't figure out how to do it.
Stuff I've tried and failed at:
Start a new StageStep.java
I can't seem to find a way to correctly instantiate and inject the step into the lifecycle. I've looked into DSL.java, but I can't seem to get to an instance to call invokeStep(), nor was I able to find out how to instantiate DSL.java with the right environment.
Look at StageStepExecution.java and do what it does.
It seems to either invoke the body with an environment variable and nothing else, or set some actions and save the state in a config file when it has no body. I could not find out how the Pipeline: Stage View plugin hooks into this, but it doesn't seem to read the config file. I've tried setting the actions (even the inner class, through reflection), but that did not seem to do anything.
Inject a custom string as Groovy body and call it with csc.newBodyInvoker()
A hacky solution I came up with was just generating the Groovy script and running it like the ParallelStep does. But the sandbox does not allow me to call new GroovyShell().evaluate(""), and if I approve that call, the 'stage' step throws a MissingMethodException. So I also do not instantiate the script with the right environment. Providing the EnvironmentExpander does not make any difference.
Referencing and modifying workflow/{n}.xml
Changing the name of a stage in the relevant workflow/{n}.xml and rebooting the server updates the name of the stage, but modifying my custom stage to look like a regular one does not seem to add the step as a stage.
Stuff I've researched:
Whether some other plugin does something like this, but I couldn't find any example of plugins starting other steps.
How Jenkins handles the scripts and starts the steps, but it seems as though every step is called directly through the method name after the script is parsed, and I found no way to hook into this.
Other plugins using the Stage View through other methods, but I could not find any.
Adding an AtomNode as a head onto the running thread, but I couldn't find how to replace or add the head, and I'm hesitant to mess with Jenkins' threading.
I've spent multiple days on this seemingly trivial call, but I can't seem to figure it out.
So the latest thing I tried actually worked, and is displayed correctly, but it ain't pretty.
I basically reimplemented DSL.invokeStep(), which required me to use reflection A LOT. This is not safe and will of course break with any changes, so I'll open an issue in the Jenkins ticket system in the hope that they will add a public interface for doing this. I'm just hoping this won't give me any weird side effects.
// First, get some environment stuff
CpsThread cpsThread = CpsThread.current();
CpsFlowExecution currentFlowExecution = (CpsFlowExecution) getContext().get(FlowExecution.class);
// instantiate the stage's descriptor
StageStep.DescriptorImpl stageStepDescriptor = new StageStep.DescriptorImpl();
// now we need to put a new FlowNode at the head of the step stack. This is of course not possible
// directly, and since this code runs outside that package (and the sandbox), putting our class in
// the same package doesn't work either - hence the reflection below
// get the 'head' field
Field cpsHeadField = CpsThread.class.getDeclaredField("head");
cpsHeadField.setAccessible(true);
Object headValue = cpsHeadField.get(cpsThread);
// get its value
Method head_get = headValue.getClass().getDeclaredMethod("get");
head_get.setAccessible(true);
FlowNode currentHead = (FlowNode) head_get.invoke(headValue);
// create a new StepAtomNode starting at the current value of 'head'
FlowNode an = new StepAtomNode(currentFlowExecution, stageStepDescriptor, currentHead);
// now set this as the new head.
Method head_setNewHead = headValue.getClass().getDeclaredMethod("setNewHead", FlowNode.class);
head_setNewHead.setAccessible(true);
head_setNewHead.invoke(headValue, an);
// Create a new CpsStepContext, and as the constructor is protected, use reflection again
Constructor<?> declaredConstructor = CpsStepContext.class.getDeclaredConstructors()[0];
declaredConstructor.setAccessible(true);
CpsStepContext context = (CpsStepContext) declaredConstructor.newInstance(stageStepDescriptor, cpsThread, currentFlowExecution.getOwner(), an, null);
stageStepDescriptor.checkContextAvailability(context); // Good to check stuff I guess
// Create a new instance of the step, passing in arguments as a Map
Map<String, Object> stageArguments = new HashMap<>();
stageArguments.put("name", "mynutest");
Step stageStep = stageStepDescriptor.newInstance(stageArguments);
// so start the damn thing
StepExecution execution = stageStep.start(context);
// now that we have a callable instance, we set the step on the Cps Thread. Reflection to the rescue
Method mSetStep = cpsThread.getClass().getDeclaredMethod("setStep", StepExecution.class);
mSetStep.setAccessible(true);
mSetStep.invoke(cpsThread, execution);
// Finally. Start running the step
execution.start();
I am working on a web application that uses Spring MVC.
It has been working fine on Glassfish 3.0.1, but when migrating to Glassfish 3.1 it started acting strange. Some pages only render partially, or show nothing at all, and the log contains a lot of messages of this type:
[#|2012-08-30T11:50:17.582+0200|WARNING|glassfish3.1|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=69;_ThreadName=Thread-1;|StandardWrapperValve[SpringServlet]: PWC1406: Servlet.service() for servlet SpringServlet threw exception
org.springframework.beans.NotReadablePropertyException: Invalid property 'something' of bean class [com.something.Something]: Bean property 'something' is not readable or has an invalid getter method: Does the return type of the getter match the parameter type of the setter?
at org.springframework.beans.BeanWrapperImpl.getPropertyValue(BeanWrapperImpl.java:729)
at org.springframework.beans.BeanWrapperImpl.getNestedBeanWrapper(BeanWrapperImpl.java:576)
at org.springframework.beans.BeanWrapperImpl.getBeanWrapperForPropertyPath(BeanWrapperImpl.java:553)
at org.springframework.beans.BeanWrapperImpl.getPropertyValue(BeanWrapperImpl.java:719)
at org.springframework.validation.AbstractPropertyBindingResult.getActualFieldValue(AbstractPropertyBindingResult.java:99)
at org.springframework.validation.AbstractBindingResult.getFieldValue(AbstractBindingResult.java:226)
at org.springframework.web.servlet.support.BindStatus.<init>(BindStatus.java:120)
at org.springframework.web.servlet.tags.form.AbstractDataBoundFormElementTag.getBindStatus(AbstractDataBoundFormElementTag.java:178)
at org.springframework.web.servlet.tags.form.AbstractDataBoundFormElementTag.getPropertyPath(AbstractDataBoundFormElementTag.java:198)
at org.springframework.web.servlet.tags.form.AbstractDataBoundFormElementTag.getName(AbstractDataBoundFormElementTag.java:164)
at org.springframework.web.servlet.tags.form.AbstractDataBoundFormElementTag.writeDefaultAttributes(AbstractDataBoundFormElementTag.java:127)
at org.springframework.web.servlet.tags.form.AbstractHtmlElementTag.writeDefaultAttributes(AbstractHtmlElementTag.java:421)
at org.springframework.web.servlet.tags.form.TextareaTag.writeTagContent(TextareaTag.java:95)
at org.springframework.web.servlet.tags.form.AbstractFormTag.doStartTagInternal(AbstractFormTag.java:102)
at org.springframework.web.servlet.tags.RequestContextAwareTag.doStartTag(RequestContextAwareTag.java:79)
The error message isn't incorrect, because the property in question does not have a setter method (it gets its value through the constructor). But like I said, this was not a problem when using Glassfish 3.0.1, only on the new server with Glassfish 3.1.
Does anyone know if there is something in the Glassfish version that might cause this? Or is it some kind of configuration that is missing on the new server?
Some code:
Controller:
@ModelAttribute
public SomethingContainer retrieveSomethingContainer(@PathVariable final long id) {
    return somethingContainerDao.retrieveSomethingContainer(id);
}
#InitBinder("somethingContainer")
public void initBinderForSomething(final WebDataBinder binder) {
binder.setAllowedFields(new String[] {
"something.title",
"something.description",
});
}
SomethingContainer:
@Embedded
private final Something something = new Something();

public Something getSomething() {
    return something;
}

// no setter

public String getDescription() {
    return something.getDescription();
}
Update:
Restarting Glassfish actually removes the problem - temporarily. I suspect it might have something to do with the loading of the custom binders. We had some problems with out-of-memory errors, which I thought were related, but those have been fixed without fixing this problem.
Update 2:
On the 3.0.1 server, one of the JVM arguments was -client. On the 3.1 server, it was -server. We changed it to -client, and this made the frequency of the error go down a lot: it was happening every other day with -server, but took two weeks to happen with -client.
Update 3:
Some information about the servers (more can be added if requested):
Server 1 (the working one):
Windows Server 2003
Java JDK 6 build 35
Glassfish 3.0.1 build 22
-Xmx1024m
Server 2 (the one with problems):
Windows Server 2008 64-bit
Java JDK 6 build 31
Glassfish 3.1 build 43
-Xmx1088m
-Xms1088m
We are using Spring version 3.1.0.
Update 4:
I recreated the error by renaming a field in a JSP to something that does not exist in the model attribute.
But, more importantly, I noticed something: the fields where the system can't find the getters are often fields of superclasses of the ones referenced in the model attribute. To continue my example, the SomethingContainer really looks like this:
public class SuperSomethingContainer {
    [...]
    private Something something;

    public Something getSomething() {
        return something;
    }
}

public class SomethingContainer extends SuperSomethingContainer {
    [...]
}
The reference in the controller stays as is, so it's referencing a field that is in the superclass of the object in question.
Update 5:
I tried connecting to the production server with a debugger after the error occurred. I put a breakpoint on the return statement of a controller method returning the object with the error, and tried to see if I could access the problematic field at that point. I could, so the problem must lie within Spring MVC or the generated JSP classes.
(Also, the field in error was of the type "someobject.something[0].somethingelse[0]", but when the somethingelse-list was empty, there was no error! To me, this implies that it somehow can't find the get-method of a list(?))
Update 6:
It seems that the problem has to do with the generation of Java classes from the JSPs. We did not precompile the JSPs when deploying, so they are compiled when first used. The problem occurs the first time a page is visited and its JSP is compiled. I also noticed that once the problem has occurred, JSPs compiled afterwards will all give errors. I've kept a few of the problematic generated Java files, and upon the next restart I will compare them to the working ones. Getting closer :)
Update 7:
Compared the compiled JSP Java files that resulted in an error with ones that did not, and there was no difference. So that pretty much rules that out.
So I now know that the Java object leaving the controller is fine (checked with a debugger), and the Java class generated from the JSP is fine. It must be something in between; now I need to find out what...
Update 8:
Another round of debugging, and I narrowed the problem down some more. It turns out that Spring does some caching of the properties belonging to the various classes. In org.springframework.beans.BeanWrapperImpl, method getPropertyValue, there is the following:
private Object getPropertyValue(PropertyTokenHolder tokens) throws BeansException {
    String propertyName = tokens.canonicalName;
    String actualName = tokens.actualName;
    PropertyDescriptor pd = getCachedIntrospectionResults().getPropertyDescriptor(actualName);
    if (pd == null || pd.getReadMethod() == null) {
        throw new NotReadablePropertyException(getRootClass(), this.nestedPath + propertyName);
    }
The problem is that the cachedIntrospectionResults does not contain the property in question; it does contain every other property of the class, though. I will need to dig some more to find out why it is missing - whether it is missing from the start or gets lost somewhere along the line.
Also, I've noticed that the missing properties are those that only have getters, no setters. And it seems to be context dependent, as indicated by the stack trace: not finding a property when visiting one page does not mean that it's not available when visiting another.
Update 9:
Another day, more debugging. I actually found some good stuff. The getCachedIntrospectionResults() call in the previous code block wound up calling CachedIntrospectionResults#forClass(theClassInQuestion). This returned a CachedIntrospectionResults object containing far from all of the expected properties (11 of 21). Going into the forClass method, I found:
static CachedIntrospectionResults forClass(Class beanClass) throws BeansException {
    CachedIntrospectionResults results;
    Object value = classCache.get(beanClass);
    if (value instanceof Reference) {
        Reference ref = (Reference) value;
        results = (CachedIntrospectionResults) ref.get();
    }
    else {
        results = (CachedIntrospectionResults) value;
    }
    if (results == null) {
        // build the CachedIntrospectionResults, store it in classCache and return it.
It turned out that the CachedIntrospectionResults returned was the one found by classCache.get(beanClass). So what was stored in the classCache was corrupted, or did not contain all that it should. I put a breakpoint on the classCache.get(beanClass) line, and ran the following through the debugger:
classCache.put(beanClass, null);
When the method was then allowed to finish and rebuild the CachedIntrospectionResults, things started working again. So what is stored in the classCache is out of sync with what would (and should) be created if it were rebuilt. Whether this is due to something going wrong the first time it is built, or to the classCache being corrupted somewhere along the line, I do not currently know.
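For reference (not part of the original post), the programmatic equivalent of that debugger experiment would be to evict Spring's cached introspection results and let them be rebuilt on the next access, assuming the public org.springframework.beans API:

import org.springframework.beans.CachedIntrospectionResults;

// Diagnostic sketch only, not a fix: drop all introspection results cached for this
// web application's class loader, forcing Spring to rebuild them on next bean access.
CachedIntrospectionResults.clearClassLoader(Thread.currentThread().getContextClassLoader());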
I'm starting to suspect that this has something to do with class loaders, as I've previously experienced problems due to changes in the way the class loader works when updating Glassfish.
There may be more than one possible reason. I am not sure about the actual cause, but I can give you a way to narrow down the problem.
Step 1: On the Server 2 machine, deploy the application on Glassfish 3.0.1 build 22. If it works fine there, that means there might be a problem with the Glassfish libraries. The following can be reasons for this:
A library that exists in Glassfish 3.0.1 build 22 is missing in Glassfish 3.1 build 43. You can solve this by copying all libraries from the working Glassfish server to the new server.
The Glassfish libraries may be conflicting with your Spring version. (I faced a similar kind of problem on Tomcat, and when I upgraded my Spring libraries from 3.0.1 to 3.0.3 it worked for me.) So replace your Spring libraries with the latest ones.
Step 2: If the result of step 1 is that the application does not run on the Server 2 machine on Glassfish 3.0.1 build 22 either, there may be the following reasons:
Libraries that you have placed in the Java lib directory are either not present on this server machine or have different versions.
Folders that are on the classpath, or referenced through environment variables on Server 1, either do not exist on Server 2, do not contain the jars, or contain jars with different versions.
I got a colleague of mine to investigate the error, and he was able to recreate it in a unit test. He did this by invoking the method that builds CachedIntrospectionResults for a class while stressing the JVM by filling memory with strings, under very low memory settings. With this approach it failed 20 out of 30,000 times.
As to the cause, I only got an oral explanation, so I don't have all the details, but it was something like this: Java has its own introspection results, and these are wrapped by Spring. The problem is that the Java results are held via soft references, which makes them prone to garbage collection. So when Spring was building its wrappers around these soft references at the exact same moment the garbage collector ran, some of the underlying data Spring was using was cleared, leading to properties being "lost".
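A tiny illustration of that failure mode (an illustrative sketch only, not Spring's actual code):

import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.lang.ref.SoftReference;

public class SoftReferenceIllustration {
    // A SoftReference can be cleared by the GC under memory pressure, after which
    // get() returns null - roughly the race described above, where Spring 3.1.0
    // read bean metadata through softly reachable references.
    static PropertyDescriptor[] describe(Class<?> beanClass) throws IntrospectionException {
        SoftReference<PropertyDescriptor[]> ref = new SoftReference<PropertyDescriptor[]>(
                Introspector.getBeanInfo(beanClass).getPropertyDescriptors());
        // ... heavy allocation elsewhere can trigger a GC right here ...
        return ref.get(); // may now be null: the properties appear to be "lost"
    }
}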
The solution seems to be upgrading from Spring 3.1.0.RELEASE to Spring 3.1.3.RELEASE. In that version there are some changes, and Spring no longer wraps soft references when determining the properties of a class (soft references are used only in rare, special cases instead of all the time). After upgrading Spring, the error has not been reproducible through the unit test; it remains to be seen whether this also holds in practice.
Update: It's been a few weeks, and no sign of the error. So updating the Spring version worked :)
I think I've actually found a candidate for the cause of this.
After getting the error on one of the test servers after a very short time and little use, we did some additional checks on the cause. It turned out that the test server had just half the available memory, which made us look at it a bit more thoroughly. It had not used up all its memory, but when using JConsole to investigate the memory usage of the different parts of the young generation space on the heap, it turned out that one of the survivor spaces was packed full. I'm guessing that this made parts of it overflow, leading to the overflowed parts being GC-ed or becoming unreachable because they were not where they were supposed to be.
We have yet to verify that this is in fact the problem in the production environment as well, but once the error turns up again we will check, and if it is the case we will change some memory settings to allow more space for the survivor areas of the young generation heap (-XX:SurvivorRatio=6 or something like that).
So it seems that larger Spring MVC applications need a large survivor space, especially in newer versions of Glassfish.
Indeed, there was an issue with the newly introduced ExtendedBeanInfo class in Spring 3.1.0, which was fixed in Spring 3.1.1 - see https://jira.spring.io/browse/SPR-8347.
My requirement is simple. At the beginning of each file there should be a block comment like this:
/*
* This file was last modified by {username} at {date} and has revision number {revisionnumber}
*/
I want to populate the {username}, {date} and {revisionnumber} with the appropriate content from SVN.
How can I achieve this with NetBeans and Subversion? I have searched a lot but I can't find exactly what I need.
I looked at this question and got some useful information. It is not an exact duplicate because I am working with NetBeans, but the idea is the same. This is my header:
/*
* $LastChangedDate$
* $LastChangedRevision$
*/
Then I go to Team > Subversion > Svn properties and add svn:keywords as property name and LastChangedDate LastChangedRevision as property value.
And when I commit from NetBeans it looks like this:
/*
* $LastChangedDate: 2012-02-13 17:38:57 +0200 (Пн, 13 II 2012) $
* $LastChangedRevision: 27 $
*/
Thanks all for the support! I will accept my answer because other answers do not include the NetBeans information. Nevertheless I give +1 to the other answers.
As this data only exists after the file has been committed, it should be set by SVN itself, not by a client program. (And client-side processing tends to get disabled or not configured at all.) This also means there is no simple template/substitution like you want, because after the first replacement the template variables would be lost.
You can find information about SVN's keyword substitution here. Then things like $Rev$ can be replaced by $Rev: 12 $.
You can do this with the SubWCRev program.
SubWCRev is a Windows console program which can be used to read the
status of a Subversion working copy and optionally perform keyword
substitution in a template file. This is often used as part of the
build process as a means of incorporating working copy information
into the object you are building. Typically it might be used to
include the revision number in an “About” box.
This is typically done during the build process.
If you use Linux, you can find a Linux binary here. If you wish, you could also write your own using the output of svn log.
I followed Petar Minchev's suggestion, but instead of putting the $LastChangedRevision$ tag in a comment block I embedded it in a string. Now it is available for programmatically displaying the revision number in a Help -> About dialog.
String build = "$LastChangedRevision$";
I can later display the revision value in the About dialog using a String that has all of the fluff trimmed off.
String version = build.replace("$LastChangedRevision:", "").replace("$", "").trim();
I recommend a slightly different approach.
Put the following header at the top of your source files.
/*
* This file was last modified by {username} at {date} and has revision number {revisionnumber}
*/
Then add shell scripts like the following.
Post-update / post-checkout script:
USERNAME=...   # use 'svn info' to get the last-changed author
DATE=...       # use 'svn info' to get the last-changed date
REVISION=...   # use 'svnversion' to get the revision number
sed -e "s#{username}#${USERNAME}#" -e "s#{date}#${DATE}#" -e "s#{revisionnumber}#${REVISION}#" \
    ${SOURCE_CONTROL_FILE} > ${SOURCE_FILE}
Pre-commit script:
LENGTH=$(wc -l < ${SOURCE_FILE})   # total number of lines in the source file
cat standard_header.txt > ${SOURCE_CONTROL_FILE}
tail --lines $((${LENGTH}-4)) ${SOURCE_FILE} >> ${SOURCE_CONTROL_FILE}