How to specify random group position, instead of sequence, in Regular Expressions? - java

I’m trying to use the available Wiktionary data dump downloads, which I have been translating with Java and its regex classes, specifically Pattern and Matcher, with fair success.
The word-definition dumps, which are my main interest, are in raw wiki markup, which is neither HTML nor XML but its own unique format. There are many different elements, but the most difficult to deal with are the templates.
What I’ve come up against are specific templates that have positional fields, as well as optional ones, which can appear in any order. I have been able to come up with regular expressions, which almost do the job, but are not quite adequate to handle every instance I encounter, where fields are switched around or optionally omitted.
I am realizing from this that I do not know how to designate regex group position when the order of occurrence is more sophisticated than just a sequence.
An example of one of these complicated templates is that of “term”, documented on the following page: http://en.wiktionary.org/wiki/Template:term
My best stab at the regex (omitting, for now, the extra escape characters needed to make the string Java compatible) is the following:
\{\{term\|(.+?)(?:\|(.*?))?(?:\|([\w, -]+?))?(?:\|lang=([\w-]+?))?(?:\|sc=(\w+?))?(?:\|tr=([\w, -]+?))?(?:\|pos=(\w+?))?(?:\|lit=([\w, -]+))?\}\}
This works for genuine examples of term templates encountered such as:
{{term|λόγος|logos|word|lang=grc}}
{{term|verbum|verbō|for the word|lang=la}}
{{term|*bʰer-||to carry|lang=ine-pro}}
{{term|alternative lifestyle|lang=en}}
{{term|שוין||already|lang=yi|tr=shoyn}}
{{term|Bögge||goblin, snot|lang=nds}}
{{term|as}}
But it fails to work properly for the following:
{{term|deus ex māchinā||device|pos=n|lit=god from a device|lang=la}}
{{term|ри̏ба||fish|tr=rȉba|sc=Cyrl|lang=sh}}
{{term|שוין|lang=yi|tr=shoyn}}
{{term|lang=en|vocational}}
There’s got to be a way of specifying that some groups are positional and some can appear randomly, instead of just optionally in a specific sequence. This should be, for instance, a common issue when processing many HTML elements. I would very much appreciate any advice on how to write the regex to deal with this positional sophistication. Thanks so much! – Jeff.

Your regex matches every line, according to RegexBuddy (Java flavor), although I can't tell whether it's capturing exactly what you want.
It is extremely slow, however: Debuggex has been churning away on it for about ten minutes now, still with no response, despite a pretty small set of input.
...Finally:
^\{\{term\|(.+?)(?:\|(.*?))?(?:\|([\w, -]+?))?(?:\|lang=([\w-]+?))?(?:\|sc=(\w+?))?(?:\|tr=([\w, -]+?))?(?:\|pos=(\w+?))?(?:\|lit=([\w, -]+))?\}\}$
Debuggex Demo
It's actually not working on Debuggex. For some reason it's not anchoring to the line starts and ends, despite the m flag and the ^ and $ anchors I've added. They work fine in RegexBuddy.
I'm thinking that this is just not a good problem for regex - not for a reasonable single regex, anyway. Splitting each line on the | is a far better way of handling this problem.
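To illustrate, here is a minimal sketch of that split-based approach (the class and method names are mine, and it assumes the template contains no nested templates or piped links inside the braces):

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class TermTemplateParser {

    // Splits a {{term|...}} template into positional and named fields.
    // Named fields (key=value) may appear anywhere; positional fields
    // keep their relative order, including empty ones.
    public static void parse(String template) {
        String inner = template.replaceAll("^\\{\\{|\\}\\}$", "");
        String[] fields = inner.split("\\|");

        List<String> positional = new ArrayList<>();
        Map<String, String> named = new LinkedHashMap<>();

        for (int i = 1; i < fields.length; i++) { // fields[0] is "term"
            int eq = fields[i].indexOf('=');
            if (eq >= 0) {
                named.put(fields[i].substring(0, eq), fields[i].substring(eq + 1));
            } else {
                positional.add(fields[i]);
            }
        }
        System.out.println("positional=" + positional + " named=" + named);
    }

    public static void main(String[] args) {
        parse("{{term|deus ex māchinā||device|pos=n|lit=god from a device|lang=la}}");
        parse("{{term|lang=en|vocational}}");
    }
}

This handles all the problem cases above, because field order simply stops mattering.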
In addition to discouraging you against using regex, I'm also letting you know about the Stack Overflow Regular Expressions FAQ :)

Related

how to create Regex to start with specific multi line tags XML java [duplicate]

There is no day on SO that passes without a question about parsing (X)HTML or XML with regular expressions being asked.
While it's relatively easy to come up with examples that demonstrate the non-viability of regexes for this task, or with a collection of expressions to represent the concept, I still could not find on SO a formal explanation, in layman's terms, of why this is not possible.
The only formal explanations I could find so far on this site are probably extremely accurate, but also quite cryptic to the self-taught programmer:
the flaw here is that HTML is a Chomsky Type 2 grammar (context-free grammar) and RegEx is a Chomsky Type 3 grammar (regular expression)
or:
Regular expressions can only match regular languages but HTML is a context-free language.
or:
A finite automaton (which is the data structure underlying a regular expression) does not have memory apart from the state it's in, and if you have arbitrarily deep nesting, you need an arbitrarily large automaton, which collides with the notion of a finite automaton.
or:
The Pumping lemma for regular languages is the reason why you can't do that.
[To be fair: the majority of the above explanations link to Wikipedia pages, but these are not much easier to understand than the answers themselves.]
So my question is: could somebody please provide a translation in layman's terms of the formal explanations given above of why it is not possible to use regex for parsing (X)HTML/XML?
EDIT: After reading the first answer I thought that I should clarify: I am looking for a "translation" that also briefly explains the concepts it tries to translate: at the end of an answer, the reader should have a rough idea - for example - of what "regular language" and "context-free grammar" mean...
Concentrate on this one:
A finite automaton (which is the data structure underlying a regular expression) does not have memory apart from the state it's in, and if you have arbitrarily deep nesting, you need an arbitrarily large automaton, which collides with the notion of a finite automaton.
The definition of regular expressions is equivalent to the fact that a test of whether a string matches the pattern can be performed by a finite automaton (one different automaton for each pattern). A finite automaton has no memory - no stack, no heap, no infinite tape to scribble on. All it has is a finite number of internal states, each of which can read a unit of input from the string being tested, and use that to decide which state to move to next. As special cases, it has two termination states: "yes, that matched", and "no, that didn't match".
HTML, on the other hand, has structures that can nest arbitrarily deep. To determine whether a file is valid HTML or not, you need to check that all the closing tags match a previous opening tag. To understand it, you need to know which element is being closed. Without any means to "remember" what opening tags you've seen, no chance.
Note however that most "regex" libraries actually permit more than just the strict definition of regular expressions. If they can match back-references, then they've gone beyond a regular language. So the reason why you shouldn't use a regex library on HTML is a little more complex than the simple fact that HTML is not regular.
The fact that HTML doesn't represent a regular language is a red herring. Regular expressions and regular languages sound similar, but are not the same - they share a common origin, but there's a notable distance between the academic "regular languages" and the current matching power of engines. In fact, almost all modern regular expression engines support non-regular features. A simple example is (.*)\1, which uses backreferencing to match a repeated sequence of characters - for example 123123, or bonbon. Matching of recursive/balanced structures makes these even more fun.
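You can verify this directly in Java, which supports backreferences (though not recursion). A tiny sketch, using (.+)\1 rather than (.*)\1 so the repeated chunk can't be empty:

import java.util.regex.Pattern;

public class Backref {
    public static void main(String[] args) {
        // (.+)\1 matches any sequence immediately followed by itself --
        // a classic example of a non-regular language.
        Pattern doubled = Pattern.compile("(.+)\\1");
        System.out.println(doubled.matcher("123123").matches());  // true
        System.out.println(doubled.matcher("bonbon").matches());  // true
        System.out.println(doubled.matcher("bonbons").matches()); // false
    }
}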
Wikipedia puts this nicely, in a quote by Larry Wall:
'Regular expressions' [...] are only marginally related to real regular expressions. Nevertheless, the term has grown with the capabilities of our pattern matching engines, so I'm not going to try to fight linguistic necessity here. I will, however, generally call them "regexes" (or "regexen", when I'm in an Anglo-Saxon mood).
"Regular expression can only match regular languages", as you can see, is nothing more than a commonly stated fallacy.
So, why not then?
A good reason not to match HTML with a regular expression is that "just because you can doesn't mean you should". While it may be possible - there are simply better tools for the job. Consider:
Valid HTML is harder/more complex than you may think.
There are many types of "valid" HTML - what is valid in HTML, for example, isn't valid in XHTML.
Much of the free-form HTML found on the internet is not valid anyway. HTML libraries do a good job of dealing with these as well, and have been tested against many of these common cases.
Very often it is impossible to match a part of the data without parsing it as a whole. For example, you might be looking for all titles, and end up matching inside a comment or a string literal. <h1>.*?</h1> may be a bold attempt at finding the main title, but it might find:
<!-- <h1>not the title!</h1> -->
Or even:
<script>
var s = "Certainly <h1>not the title!</h1>";
</script>
The last point is the most important:
Using a dedicated HTML parser is better than any regex you can come up with. Very often, XPath allows a better expressive way of finding the data you need, and using an HTML parser is much easier than most people realize.
A good summary of the subject, and an important comment on when mixing Regex and HTML may be appropriate, can be found in Jeff Atwood's blog: Parsing Html The Cthulhu Way.
When is it better to use a regular expression to parse HTML?
In most cases, it is better to use XPath on the DOM structure a library can give you. Still, contrary to popular opinion, there are a few cases when I would strongly recommend using a regex and not a parser library:
Given a few of these conditions:
When you need a one-time update of your HTML files, and you know the structure is consistent.
When you have a very small snippet of HTML.
When you aren't dealing with an HTML file, but a similar templating engine (it can be very hard to find a parser in that case).
When you want to change parts of the HTML, but not all of it - a parser, to my knowledge, cannot answer this request: it will parse the whole document, and save a whole document, changing parts you never wanted to change.
Because HTML can have unlimited nesting of <tags><inside><tags and="<things><that><look></like></tags>"></inside></each></other> and regex can't really cope with that because it can't track a history of what it's descended into and come out of.
A simple construct that illustrates the difficulty:
<body><div id="foo">Hi there! <div id="bar">Bye!</div></div></body>
99.9% of generalized regex-based extraction routines will be unable to correctly give me everything inside the div with the ID foo, because they can't tell the closing tag for that div from the closing tag for the bar div. That is because they have no way of saying "okay, I've now descended into the second of two divs, so the next div close I see brings me back out one, and the one after that is the close tag for the first". Programmers typically respond by devising special-case regexes for the specific situation, which then break as soon as more tags are introduced inside foo and have to be unsnarled at tremendous cost in time and frustration. This is why people get mad about the whole thing.
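For contrast, a dedicated parser makes this trivial, because it tracks nesting for you. A minimal sketch using the third-party jsoup library (assumed to be on the classpath):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class DivContents {
    public static void main(String[] args) {
        String html = "<body><div id=\"foo\">Hi there! "
                + "<div id=\"bar\">Bye!</div></div></body>";
        Document doc = Jsoup.parse(html);
        // The parser knows which close tag belongs to which open tag,
        // so the inner div comes along with the outer one.
        Element foo = doc.getElementById("foo");
        System.out.println(foo.html()); // Hi there! <div id="bar">Bye!</div>
    }
}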
A regular language is a language that can be matched by a finite state machine.
(Understanding finite state machines, push-down machines, and Turing machines is basically the curriculum of a fourth-year college CS course.)
Consider the following machine, which recognizes the string "hi".
(Start) --Read h--> (A) --Read i--> (Succeed)
   \                 \
    \                 --Read any other value--> (Fail)
     --Read any other value--> (Fail)
This is a simple machine to recognize a regular language. Each expression in parentheses is a state, and each arrow is a transition. Building a machine like this will allow you to test any input string against a regular language - hence, a regular expression.
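Here is the same machine hand-rolled in Java, to make the "no memory beyond the current state" point concrete (a sketch; the class and state names are mine):

public class HiMatcher {
    enum State { START, A, SUCCEED, FAIL }

    // One state variable, one transition per input character,
    // and nothing else -- no stack, no counters.
    static boolean matches(String input) {
        State state = State.START;
        for (char c : input.toCharArray()) {
            switch (state) {
                case START: state = (c == 'h') ? State.A : State.FAIL; break;
                case A:     state = (c == 'i') ? State.SUCCEED : State.FAIL; break;
                default:    state = State.FAIL; break; // anything after "hi" fails
            }
        }
        return state == State.SUCCEED;
    }

    public static void main(String[] args) {
        System.out.println(matches("hi"));  // true
        System.out.println(matches("hit")); // false
    }
}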
HTML requires you to know more than just what state you are in - it requires a history of what you have seen before, to match tag nesting. You can accomplish this if you add a stack to the machine, but then it is no longer "regular". This is called a push-down machine, and it recognizes a context-free grammar.
A regular expression is a machine with a finite (and typically rather small) number of discrete states.
To parse XML, C, or any other language with arbitrary nesting of language elements, you need to remember how deep you are. That is, you must be able to count braces/brackets/tags.
You cannot count with finite memory. There may be more brace levels than you have states! You might be able to parse a subset of your language that restricts the number of nesting levels, but it would be very tedious.
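To see what that missing memory looks like, here is a sketch of a brace checker. The single int counter below is doing the job of a pushdown stack, and it can grow past any fixed number of states:

public class BraceChecker {
    static boolean balanced(String s) {
        int depth = 0; // the "memory" a finite automaton doesn't have
        for (char c : s.toCharArray()) {
            if (c == '{') depth++;
            else if (c == '}' && --depth < 0) return false; // closed too early
        }
        return depth == 0;
    }

    public static void main(String[] args) {
        System.out.println(balanced("{a{b}{c{d}}}")); // true
        System.out.println(balanced("{{}"));          // false
    }
}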
A grammar is a formal definition of where words can go. For example, adjectives precede nouns in English grammar, but follow nouns en la gramática española.
Context-free means that the grammar works universally in all contexts. Context-sensitive means there are additional rules in certain contexts.
In C#, for example, using means something different in using System; at the top of a file than in using (var sw = new StringWriter(...)). A more relevant example is the following code within code:
void Start()
{
    string myCode = @"
        void Start()
        {
            Console.WriteLine(""x"");
        }
    ";
}
There's another practical reason for not using regular expressions to parse XML and HTML that has nothing to do with the computer science theory at all: your regular expression will either be hideously complicated, or it will be wrong.
For example, it's all very well writing a regular expression to match
<price>10.65</price>
But if your code is to be correct, then:
It must allow whitespace after the element name in both the start and end tags
If the document is in a namespace, then it should allow any namespace prefix to be used
It should probably allow and ignore any unknown attributes appearing in the start tag (depending on the semantics of the particular vocabulary)
It may need to allow whitespace before and after the decimal value (again, depending on the detailed rules of the particular XML vocabulary).
It should not match something that looks like an element, but is actually in a comment or CDATA section (this becomes especially important if there is a possibility of malicious data trying to fool your parser).
It may need to provide diagnostics if the input is invalid.
Of course some of this depends on the quality standards you are applying. We see a lot of problems on StackOverflow with people having to generate XML in a particular way (for example, with no whitespace in the tags) because it is being read by an application that requires it to be written in a particular way. If your code has any kind of longevity then it's important that it should be able to process incoming XML written in any way that the XML standard permits, and not just the one sample input document that you are testing your code on.
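For comparison, a parser plus XPath gets all of that correctness for free. A minimal sketch using only the JDK (the element names are from the example above):

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class PriceReader {
    public static void main(String[] args) throws Exception {
        String xml = "<order><price > 10.65 </price ></order>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        // Whitespace in tags, comments, and CDATA are all the parser's problem.
        String price = (String) XPathFactory.newInstance().newXPath()
                .evaluate("string(//price)", doc, XPathConstants.STRING);
        System.out.println(price.trim()); // 10.65
    }
}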
So others have gone and given brief definitions for most of these things, but I don't really think they cover WHY normal regexes are what they are.
There are some great resources on what a finite state machine is, but in short, a seminal paper in computer science proved that the basic grammar of regexes (the standard ones, used by grep, not the extended ones, like PCRE) can always be manipulated into a finite-state machine, meaning a 'machine' where you are always in a box, and have a limited number of ways to move to the next box. In short, you can always tell what the next 'thing' you need to do is just by looking at the current character. (And yes, even when it comes to things like 'match at least 4, but no more than 5 times', you can still create a machine like this.) (I should note that the machine I describe here is technically only a subtype of finite-state machines, but it can implement any other subtype, so...)
This is great because you can always very efficiently evaluate such a machine, even for large inputs. Studying these sorts of questions (how does my algorithm behave when the number of things I feed it gets big) is called studying the computational complexity of the technique. If you're familiar with how a lot of calculus deals with how functions behave as they approach infinity, well, that's pretty much it.
So what's so great about a standard regular expression? Well, any given regex can match a string of length N in no more than O(N) time (meaning that doubling the length of your input doubles the time it takes; it says nothing about the speed for a given input). (Of course, some are faster: the regex * could match in O(1), meaning constant, time.) The reason is simple: remember, because the system has only a few paths from each state, you never 'go back', and you only need to check each character once. That means even if I pass you a 100 gigabyte file, you'll still be able to crunch through it pretty quickly - which is great!
Now, it's pretty clear why you can't use such a machine to parse arbitrary XML: you can have infinite tags-in-tags, and to parse correctly you need an infinite number of states. But if you allow recursive replaces, a PCRE is Turing complete, so it could totally parse HTML! And even if you don't, a PCRE can parse any context-free grammar, including XML. So the answer is "yeah, you can". Now, it might take exponential time (you can't use our neat finite-state machine, so you need a big fancy parser that can rewind, which means that a crafted expression will take centuries on a big file), but still. Possible.
But let's talk real quick about why that's an awful idea. First of all, while you'll see a ton of people saying "omg, regexes are so powerful", the reality is... they aren't. What they are is simple. The language is dead simple: you only need to know a few meta-characters and their meanings, and you can understand (eventually) anything written in it. However, the issue is that those meta-characters are all you have. See, they can do a lot, but they're meant to express fairly simple things concisely, not to try and describe a complicated process.
And XML sure is complicated. It's pretty easy to find examples in some of the other answers: you can't match stuff inside comment fields, etc. Representing all of that in a programming language takes work - and that's with the benefits of variables and functions! PCREs, for all their features, can't come close to that. Any hand-made implementation will be buggy: scanning blobs of meta-characters to check matching parentheses is hard, and it's not like you can comment your code. It'd be easier to define a meta-language and compile that down to a regex - and at that point, you might as well just take the language you wrote your meta-compiler with and write an XML parser. It'd be easier for you, faster to run, and just better overall.
For more neat info on this, check out this site. It does a great job of explaining all this stuff in layman's terms.
Don't parse XML/HTML with regex; use a proper XML/HTML parser and a powerful XPath query.
Theory:
According to compiling theory, XML/HTML can't be parsed with a regex based on a finite state machine. Due to the hierarchical construction of XML/HTML, you need to use a pushdown automaton and manipulate an LALR grammar using a tool like YACC.
realLife©®™ everyday tools in a shell:
You can use one of the following :
xmllint often installed by default with libxml2, xpath1 (check my wrapper to get newline-delimited output)
xmlstarlet can edit, select, transform... Not installed by default, xpath1
xpath installed via perl's module XML::XPath, xpath1
xidel xpath3
saxon-lint my own project, wrapper over Michael Kay's Saxon-HE Java library, xpath3
Or you can use high-level languages and proper libs. I think of:
python's lxml (from lxml import etree)
perl's XML::LibXML, XML::XPath, XML::Twig::XPath, HTML::TreeBuilder::XPath
ruby nokogiri, check this example
php DOMXpath, check this example
Check: Using regular expressions with HTML tags
In a purely theoretical sense, it is impossible for regular expressions to parse XML. They are defined in a way that allows them no memory of any previous state, thus preventing the correct matching of an arbitrary tag, and they cannot penetrate to an arbitrary depth of nesting, since the nesting would need to be built into the regular expression.
Modern regex parsers, however, are built for their utility to the developer, rather than their adherence to a precise definition. As such, we have things like back-references and recursion that make use of knowledge of previous states. Using these, it is remarkably simple to create a regex that can explore, validate, or parse XML.
Consider for example,
(?:
    <!\-\-[\S\s]*?\-\->
    |
    <([\w\-\.]+)[^>]*?
    (?:
        \/>
        |
        >
        (?:
            [^<]
            |
            (?R)
        )*
        <\/\1>
    )
)
This will find the next properly formed XML tag or comment, and it will only find it if its entire contents are properly formed. (This expression has been tested using Notepad++, which uses Boost C++'s regex library, which closely approximates PCRE.)
Here's how it works:
The first chunk matches a comment. It's necessary for this to come first so that it will deal with any commented-out code that otherwise might cause hang ups.
If that doesn't match, it will look for the beginning of a tag. Note that it uses parentheses to capture the name.
This tag will either end in a />, thus completing the tag, or it will end with a >, in which case it will continue by examining the tag's contents.
It will continue parsing until it reaches a <, at which point it will recurse back to the beginning of the expression, allowing it to deal with either a comment or a new tag.
It will continue through the loop until it arrives at either the end of the text or at a < that it cannot parse. Failing to match will, of course, cause it to start the process over. Otherwise, the < is presumably the beginning of the closing tag for this iteration. Using the back-reference inside a closing tag <\/\1>, it will match the opening tag for the current iteration (depth). There's only one capturing group, so this match is a simple matter. This makes it independent of the names of the tags used, although you could modify the capturing group to capture only specific tags, if you need to.
At this point it will either kick out of the current recursion, up to the next level or end with a match.
This example solves problems dealing with whitespace or identifying relevant content through the use of character groups that merely negate < or >, or, in the case of the comments, by using [\S\s], which will match anything, including carriage returns and new lines, even in single-line mode, continuing until it reaches a -->. Hence, it simply treats everything as valid until it reaches something meaningful.
For most purposes, a regex like this isn't particularly useful. It will validate that XML is properly formed, but that's all it will really do, and it doesn't account for properties (although this would be an easy addition). It's only this simple because it leaves out real world issues like this, as well as definitions of tag names. Fitting it for real use would make it much more of a beast. In general, a true XML parser would be far superior. This one is probably best suited for teaching how recursion works.
Long story short: use an XML parser for real work, and use this if you want to play around with regexes.

RegEx to replace namespace in XML [duplicate]


Parsing Street Address Using RegEx

I know there are many questions asked on this topic. I am trying to parse and fetch street addresses from an HTML page. The format of these pages does not follow any pattern. Can someone help me in coming up with a regex that would match a street address, irrespective of the number of tags between them? Are there any other ways to do this other than using regular expressions?
Before you get all traditional, let me share my experience: I've parsed over 1 million web pages this way in Java. When I need small pieces out of a page, it is perfect when paired with a replace to strip tags. In fact, it is more efficient and faster, especially when using Java's great replaceAll() method to strip tags. Build a fork-join pool of both and test some parsing; you won't believe your eyes. I've added that part at the end. This is not the full regex but a starting point, since it would take some trial and error to build. I believe the statement was: a bunch of pages with no clear route to the address.
So, yes, there are ways. What follows is a bit of an introduction to thinking about this in regex.
Words and groups of words are always in a pattern, otherwise they aren't readable. Still, there are several things to note. Addresses can vary greatly, so it is important to continue building out a regex. The next thing: if you have access to a CAS engine, use it for anything you get. It standardizes your address.
As a must: have you tried XML? It will narrow everything down and can help get rid of tags before you format. You need to narrow everything. If you are using Java or Python, run this step in a ForkJoinPool or MultiprocessingPool.
Your process should be:
Narrow if possible
Execute a regex that exploits formatting
Lastly, here is a regex cheat sheet.
Keep in mind. I don't know what websites you are using or their formats. I have personally had to pull this data with different per site regexes but that was for odd formats and other issues present with websites that run like databases of a certain variety.
That said, an address has a format of numbers, then a street name and apartment number that can be pretty much anything, then city, state, then zip code. Basically it is \d+ then any combination of letters and numbers.
So (in Java, with double backslashes) to start you off:
[\\d]+[A-Za-z0-9\\s,\\.]+
If you want to start at but exclude tags to narrow your search if not using xml, use:
(?<=start)[\\d]+[A-Za-z0-9\\s,\\.]+?(?=end)
Html pages always seem to have tags so that would be something like
(?<=>)[\\d]+[A-Za-z0-9\\s,\\.]+?(?=<)
You may be able to use a zip code as your ending place if there is a multi-part zipcode.
[\\d]+[A-Za-z0-9\\s,\\.]+?[\\d\\-]+
As a final note, you can chain together regexes with a pipe delimiter, e.g.:
(?<=start)[\\d]+[A-Za-z0-9\\s,\\.]+?[\\d\\-]+|(?<=start)[A-Za-z0-9\\s,\\.]+?(?=end)
If this is not narrow enough there are several additional steps:
compare your results (average word length, etc.) and throw out any great outliers
write a formatter script per site to do cleanup that uses single or multi-threading to replace what you don't need.
You will probably need to strip out html as well. Run this regex in a replace statement to do that.
<.*?>
If you have trouble, use something like my regex tester (the website not my own) to build your regex.
Having worked on this problem quite extensively at SmartyStreets, I will tell you "NO" to parsing/finding street addresses with a regex.
Addresses are not a regular language and cannot be matched by a regular expression.
To solve the problem, we developed an API which actually finds and extracts addresses, with notably high accuracy. It's free for low-volume use. (It was not an easy problem to solve.) You can try it for free on the homepage demo. And no, this is not a solicitation. If you want to learn more about street addresses in any amount of detail from very basic to very technical, just email us because we want to educate the community about addresses.
To extract addresses, there are regular expressions under the hood, but results are biased strongly toward those which actually verify, meaning which actually exist. In other words, this is a parser performing complex operations to find and match addresses.
This answer to a very similar question is related, and you may find it useful. The other answers highlight some important points about the difficulties and solutions for parsing street addresses...

Regex performance issues - Profanity filter

I'm making a profanity filter (bad idea I know), and I'm trying to do it with regex in Java.
Right now here's my regex example string, this would filter 2 words, foo and bar.
(?i)f(?>[.,;:*`'^+~\\/#|]*+|.)o(?>[.,;:*`'^+~\\/#|]*+|.)o|b(?>[.,;:*`'^+~\\/#|]*+|.)a(?>[.,;:*`'^+~\\/#|]*+|.)r
Basically, I have it ignore case, then I put (?>[.,;:*`'^+~\\/#|]*+|.) in between each letter of a curse word, and | between each complete curse-word regex.
It works, but it's sorta slow.
If I have 6 words in the filter, it will filter a fairly long string (500 characters) in 939,548 nanoseconds. When I have 12, it just about doubles.
So, about 1ms per 6 curse words with this. But my filter will have hundreds (400 or so).
Calculating this, it would take about 66ms to filter this long string.
This is a chat server I'm building, and if I have lots of users on (say, 5,000) and 1 out of 5 are chatting in 1 second (1,000 chat messages) I need to filter a message in about 1ms.
Am I asking too much of regexps? Would it be faster to make my own specialized type of filter by hand? Are there ways to optimize this?
I am precompiling the regex.
If you want to see the effect of this regex: http://regexr.com?30454
Update: Another thing I could do is have chat messages filtered client-side in ActionScript.
Update: I believe the only way to achieve such a degree of performance would be a hand-coded solution without regexps, sadly, so I'll have to do a more basic filter.
To answer your question "Am I asking too much of regexps?" - Yes.
I spent the better part of 2 years working on a profanity filter using regular expressions and finally gave up. During this time, I tried all of these things:
Pre-compiling
Character classes (punctuation, whitespace, etc)
Non-capturing groups (mentioned above and can greatly reduce memory and increase speed)
Combining similar regexps (also mentioned above)
Trimming whitespace (str.trim())
Case handling (str.toLowerCase())
Packing and unpacking whitespace (convert multiple adjacent whitespace to a single space and vice-versa)
Writing my own custom regexp engine (highly unrecommended as it is complex and not scalable)
Nothing worked well and as my blacklist grew my system slowed down. In the end I gave up and implemented a linear analysis filter, which is now the core part of CleanSpeak, my company's profanity filtering product.
We found that we were also able to do some great multi-threading and other optimizations once we stopped using regexps and went from handling 600-700 messages per second to 10,000+ messages per second.
Lastly, we also found that performing linear analysis made the filter more accurate and allowed us to solve the "Scunthorpe problem" and many of the other ones people have mentioned in the comments here.
You can definitely try all of the things I mention above and see if you can get your performance up, but it is a hard problem to solve because regexps weren't really designed for language analysis. They were designed for text analysis, which is a very different problem.
Can you make use of any of the built-in character classes, e.g.
\bf\W?o\W?o\W?\b
to detect "foo" with any non-letters between the letters, but not "food" or "snafoo" (sic)
However, the weakness of this is that "_" counts as a word character :-(
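For what it's worth, a quick Java check of that pattern, including its underscore weakness (a sketch; the test strings are mine):

import java.util.regex.Pattern;

public class ClassCheck {
    public static void main(String[] args) {
        // \W matches any non-word character, so this catches "f.o.o"
        // but not "food" -- with the caveat that '_' counts as a word char.
        Pattern p = Pattern.compile("\\bf\\W?o\\W?o\\W?\\b", Pattern.CASE_INSENSITIVE);
        System.out.println(p.matcher("f.o.o").find());  // true
        System.out.println(p.matcher("foo").find());    // true
        System.out.println(p.matcher("food").find());   // false
        System.out.println(p.matcher("f_o_o").find());  // false: '_' is a word char
    }
}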
I think a more promising approach is to use a simple, fast filter with some false positives, then re-test the positives against a more rigorous filter. Unless your users are total potty-mouths, there shouldn't be all that many detailed checks needed.
Update: Thought of this after I went home, but Qtax got there first (see other answer) - try removing all the punctuation first, then run plain word patterns on the text. This should make the word patterns much simpler and faster, especially when you have a lot of words to test.
Finally, note that within [] you don't need to escape regex special characters, so:
[.,;:*`'^+~\\/#|]
is OK (backslash still needs escaping)
When you have many words, group them by their first equal characters, and you should see less than linear time increase for added words.
I mean that if you have two words "foobar" and "fook" make a regex formed like foo(?:bar|k).
Using non-backtracking groups instead of non-capturing might increase performance. I.e. replace (?:...) with (?>...).
Another suggestion could be to just remove all the punctuation in the string first and then you could apply a simpler expression.
Also, if you can, try to apply the expression on longer strings. As that would probably be faster than doing one message at a time. Maybe combine several messages for a first check.
you can try to replace all
[.,;:*`'^+~\\/#|]+
with empty strings and then simply check for
\b(foo|bar)\b
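A sketch of that two-pass idea in Java (the word list is a stand-in, and both patterns are compiled once up front):

import java.util.regex.Pattern;

public class TwoPassFilter {
    // Pass 1: strip the obfuscating punctuation.
    private static final Pattern NOISE =
            Pattern.compile("[.,;:*`'^+~\\\\/#|]+");
    // Pass 2: one plain word alternation over the cleaned text.
    private static final Pattern WORDS =
            Pattern.compile("(?i)\\b(?:foo|bar)\\b");

    static boolean isProfane(String message) {
        String cleaned = NOISE.matcher(message).replaceAll("");
        return WORDS.matcher(cleaned).find();
    }

    public static void main(String[] args) {
        System.out.println(isProfane("f.o.o fighters")); // true
        System.out.println(isProfane("barricade"));      // false
    }
}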
UPDATE:
If you are more paranoid about spaces: f( *+)o\1o|b( *+)a\2r
Or more paranoid in general: f([^o]?)o\1o|b([^a]?)a\2r

Alternatives to Regular Expressions

I have a set of strings with numbers embedded in them. They look something like /cal/long/3/4/145:999 or /pa/metrics/CosmicRay/24:4:bgp:EnergyKurtosis. I'd like to have an expression parser that is
Easy to use. Given a few examples someone should be able to form a new expression. I want end users to be able to form new expressions to query this set of strings. Some of the potential users are software engineers, others are testers and some are scientists.
Allows for constraints on numbers. Something like '/cal/long/3/4/143:#>100&<1110' to specify that a string prefixed with '/cal/long/3/4/143:' and then a number between (100, 1110) is expected.
Supports '|' (alternation): the expression '/cal/(long|short)/3/4/' would match '/cal/long/3/4/1:2' as well as '/cal/short/3/4/1:2'.
Has a Java implementation available or would be easy to implement in Java.
Interesting alternative ideas would be useful. I'm also entertaining the idea of just implementing the subset of regular expressions that I need plus the numerical constraints.
Thanks!
There's no reason to reinvent the wheel! The core of a regular expression engine is built on a strong foundation of mathematics and computer science; the reason we continue to use them today is they are principally sound and won't be improved in the foreseeable future.
If you do find or create some alternative parsing language that only covers a subset of the possibilities Regex can, you will quickly have a user asking for a concept that can be expressed in Regex but your flavor just plain leaves out. Spend your time solving problems that haven't been solved instead!
I'm inclined to agree with Rex M, although your second requirement for numerical constraints complicates things. Unless you only allowed very basic constraints, I'm not aware of a way to succinctly express that in a regular expression. If there is such a way, please disregard the rest of my answer and follow the other suggestions here. :)
You might want to consider a parser generator - things like the classic lex and yacc. I'm not really familiar with the Java choices, but here's a list:
http://java-source.net/open-source/parser-generators
If you're not familiar, the standard approach would be to first create a lexer that turns your strings into tokens. Then you would pass those tokens onto a parser that applies your grammar to them and spits out some kind of result.
In your case, I envision the parser resulting in a combination of a regular expression and additional conditions. For your numerical-constraint example, it might give you the regular expression /cal/long/3/4/143:(\d+) and a constraint to apply to the first grouping (the \d+ portion) that requires the number to lie between 100 and 1110. You'd then apply the RE to your strings for candidates, and apply the constraint to those candidates to find your matches.
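In Java, the "regex for candidates, code for constraints" combination might look like this (a sketch; the pattern and bounds come from the question's example):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ConstrainedMatch {
    // The regex finds candidates; the numeric constraint
    // (here: 100 < n < 1110) filters them in plain code.
    private static final Pattern CANDIDATE =
            Pattern.compile("/cal/long/3/4/143:(\\d+)");

    static boolean matches(String s) {
        Matcher m = CANDIDATE.matcher(s);
        if (!m.matches()) return false;
        int n = Integer.parseInt(m.group(1));
        return n > 100 && n < 1110;
    }

    public static void main(String[] args) {
        System.out.println(matches("/cal/long/3/4/143:999"));  // true
        System.out.println(matches("/cal/long/3/4/143:1200")); // false
    }
}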
It's a pretty complicated approach, so hopefully there's a simpler way. I hope that gives you some ideas, at least.
The Java constraint is a severe one. I would recommend using parsing combinators, but you will have to translate the ideas to Java using classes instead of functions. There are many, many papers available on this topic; one of the easiest to approach is Graham Hutton's Higher-Order Functions for Parsing. Hutton's approach makes it especially easy to decide to succeed or fail based on conditions like the magnitude of a number, as you show in your example.
Unfortunately, not all programmers (myself included) are as familiar with RegEx as they ought to be. This often means we end up writing our own string-parsing logic where RegEx could otherwise have served us well.
This isn't always bad. It's possible in some cases to write a DSL (a class, a cohesive set of methods) that's more elegant and readable and meets the precise needs of your problem domain. The trouble is that it can take dozens of iterations to distill the problem into a DSL that is simple and intuitive. And only if the DSL will be used far and wide in the application or by a large community is this trouble warranted. Don't write an elegant solution to a problem that only appears sporadically.
If you're going to go the parser route, check out GOLD Parsing System. It's often a better option than something like YACC, cleaner looking than pure regexes, and supports Java.
http://goldparser.org/about/how-it-works.htm
Actually what you described is the Java Pattern Matcher. Which just happens to use Regex as its language.
