Source Code Cross Referenced for LexerTestUtilities.java in IDE-Netbeans » lexer » org.netbeans.lib.lexer.test



/*
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER.
 *
 * Copyright 1997-2007 Sun Microsystems, Inc. All rights reserved.
 *
 * The contents of this file are subject to the terms of either the GNU
 * General Public License Version 2 only ("GPL") or the Common
 * Development and Distribution License("CDDL") (collectively, the
 * "License"). You may not use this file except in compliance with the
 * License. You can obtain a copy of the License at
 * http://www.netbeans.org/cddl-gplv2.html
 * or nbbuild/licenses/CDDL-GPL-2-CP. See the License for the
 * specific language governing permissions and limitations under the
 * License.  When distributing the software, include this License Header
 * Notice in each file and include the License file at
 * nbbuild/licenses/CDDL-GPL-2-CP.  Sun designates this
 * particular file as subject to the "Classpath" exception as provided
 * by Sun in the GPL Version 2 section of the License file that
 * accompanied this code. If applicable, add the following below the
 * License Header, with the fields enclosed by brackets [] replaced by
 * your own identifying information:
 * "Portions Copyrighted [year] [name of copyright owner]"
 *
 * Contributor(s):
 *
 * The Original Software is NetBeans. The Initial Developer of the Original
 * Software is Sun Microsystems, Inc. Portions Copyright 1997-2007 Sun
 * Microsystems, Inc. All Rights Reserved.
 *
 * If you wish your version of this file to be governed by only the CDDL
 * or only the GPL Version 2, indicate your decision by adding
 * "[Contributor] elects to include this software in this distribution
 * under the [CDDL or GPL Version 2] license." If you do not indicate a
 * single choice of license, a recipient has the option to distribute
 * your version of this file under either the CDDL, the GPL Version 2 or
 * to extend the choice of license to its licensees as provided above.
 * However, if you add GPL Version 2 code and therefore, elected the GPL
 * Version 2 license, then the option applies only if the new code is
 * made subject to such option by the copyright holder.
 */

package org.netbeans.lib.lexer.test;

import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;
import java.util.List;
import javax.swing.event.DocumentEvent;
import javax.swing.event.DocumentListener;
import javax.swing.text.BadLocationException;
import javax.swing.text.Document;
import junit.framework.TestCase;
import org.netbeans.api.lexer.Language;
import org.netbeans.api.lexer.Token;
import org.netbeans.api.lexer.TokenHierarchyEvent;
import org.netbeans.api.lexer.TokenHierarchyListener;
import org.netbeans.api.lexer.TokenHierarchy;
import org.netbeans.api.lexer.TokenId;
import org.netbeans.api.lexer.TokenSequence;
import org.netbeans.api.lexer.TokenUtilities;
import org.netbeans.junit.NbTestCase;
import org.netbeans.lib.lexer.LexerApiPackageAccessor;
import org.netbeans.lib.lexer.LexerUtilsConstants;
import org.netbeans.lib.lexer.TokenList;
import org.netbeans.lib.lexer.test.dump.TokenDumpCheck;

/**
 * Various utilities related to lexer and token testing.
 *
 * @author mmetelka
 */
public final class LexerTestUtilities {

    /** Flag for additional correctness checks (may degrade performance). */
    private static final boolean testing = Boolean.getBoolean("netbeans.debug.lexer.test");

    private static final String LAST_TOKEN_HIERARCHY = "last-token-hierarchy";

    private static Field tokenListField;

    private LexerTestUtilities() {
        // no instances
    }

    public static void assertConsistency(TokenHierarchy<?> hi) {
        String error = LexerApiPackageAccessor.get()
                .tokenHierarchyOperation(hi).checkConsistency();
        if (error != null) {
            TestCase.fail("Consistency error:\n" + error);
        }
    }

    /**
     * @see #assertTokenEquals(String, TokenSequence, TokenId, String, int)
     */
    public static void assertTokenEquals(TokenSequence<?> ts, TokenId id, String text, int offset) {
        assertTokenEquals(null, ts, id, text, offset);
    }

    /**
     * Compare <code>TokenSequence.token()</code> to the given
     * token id, text and offset.
     *
     * @param offset expected offset. It may be -1 to prevent offset testing.
     */
    public static void assertTokenEquals(String message, TokenSequence<?> ts,
            TokenId id, String text, int offset) {
        message = messagePrefix(message);
        Token<?> t = ts.token();
        TestCase.assertNotNull("Token is null", t);
        TokenId tId = t.id();
        TestCase.assertEquals(message + "Invalid token.id() for text=\""
                + debugTextOrNull(t.text()) + '"', id, tId);
        CharSequence tText = t.text();
        assertTextEquals(message + "Invalid token.text() for id="
                + LexerUtilsConstants.idToString(id), text, tText);
        // The token's length must correspond to text.length()
        TestCase.assertEquals(message + "Invalid token.length()", text.length(), t.length());

        if (offset != -1) {
            int tsOffset = ts.offset();
            TestCase.assertEquals(message + "Invalid tokenSequence.offset()", offset, tsOffset);

            // It should also be true that if the token is non-flyweight then
            // ts.offset() == t.offset()
            // and if it's flyweight then t.offset() == -1
            int tOffset = t.offset(null);
            assertTokenOffsetMinusOneForFlyweight(t.isFlyweight(), tOffset);
            if (!t.isFlyweight()) {
                assertTokenOffsetsEqual(message, tOffset, offset);
            }
        }
    }
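
    // Illustrative usage sketch (not taken from the original sources): a test that has
    // obtained a TokenSequence can position it and then compare the current token against
    // an expected id, text and offset. SimpleTokenId is a hypothetical TokenId enum used
    // only for illustration.
    //
    //     TokenHierarchy<?> hi = TokenHierarchy.create("ab cd", SimpleTokenId.language());
    //     TokenSequence<?> ts = hi.tokenSequence();
    //     TestCase.assertTrue(ts.moveNext());
    //     LexerTestUtilities.assertTokenEquals(ts, SimpleTokenId.WORD, "ab", 0);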

    public static void assertTokenEquals(TokenSequence<?> ts, TokenId id, String text,
            int offset, int lookahead, Object state) {
        assertTokenEquals(null, ts, id, text, offset, lookahead, state);
    }

    public static void assertTokenEquals(String message, TokenSequence<?> ts, TokenId id,
            String text, int offset, int lookahead, Object state) {
        assertTokenEquals(message, ts, id, text, offset);

        message = messagePrefix(message);
        TestCase.assertEquals(message + "Invalid token.lookahead()", lookahead, lookahead(ts));
        TestCase.assertEquals(message + "Invalid token.state()", state, state(ts));
    }

    public static void assertTokenOffsetsEqual(String message, int offset1, int offset2) {
        if (offset1 != -1 && offset2 != -1) { // both non-flyweight
            TestCase.assertEquals(messagePrefix(message) + "Offsets equal", offset1, offset2);
        }
    }

    public static void assertTokenFlyweight(Token token) {
        TestCase.assertEquals("Token flyweight", true, token.isFlyweight());
    }

    public static void assertTokenNotFlyweight(Token token) {
        TestCase.assertEquals("Token not flyweight", true, !token.isFlyweight());
    }

    private static void assertTokenOffsetMinusOneForFlyweight(boolean tokenFlyweight, int offset) {
        if (tokenFlyweight) {
            TestCase.assertEquals("Flyweight token => token.offset()=-1", -1, offset);
        } else { // non-flyweight
            TestCase.assertTrue("Non-flyweight token => token.offset()!=-1 but " + offset,
                    (offset != -1));
        }
    }

    /**
     * Assert that the token sequence has a next token and that it matches
     * the given token id and text.
     */
    public static void assertNextTokenEquals(TokenSequence<?> ts, TokenId id, String text) {
        assertNextTokenEquals(null, ts, id, text);
    }

    public static void assertNextTokenEquals(String message, TokenSequence<?> ts,
            TokenId id, String text) {
        String messagePrefix = messagePrefix(message);
        TestCase.assertTrue(messagePrefix + "No next token available", ts.moveNext());
        assertTokenEquals(message, ts, id, text, -1);
    }
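
    // Illustrative sketch (assumes a hypothetical SimpleTokenId language): a typical lexer
    // test walks the whole input by chaining assertNextTokenEquals() calls and finally
    // checks that no token is left.
    //
    //     TokenHierarchy<?> hi = TokenHierarchy.create("a+b", SimpleTokenId.language());
    //     TokenSequence<?> ts = hi.tokenSequence();
    //     LexerTestUtilities.assertNextTokenEquals(ts, SimpleTokenId.IDENTIFIER, "a");
    //     LexerTestUtilities.assertNextTokenEquals(ts, SimpleTokenId.OPERATOR, "+");
    //     LexerTestUtilities.assertNextTokenEquals(ts, SimpleTokenId.IDENTIFIER, "b");
    //     TestCase.assertFalse(ts.moveNext());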

    /**
     * @see #assertTokenSequencesEqual(String,TokenSequence,TokenHierarchy,TokenSequence,TokenHierarchy,boolean)
     */
    public static void assertTokenSequencesEqual(
            TokenSequence<?> expected, TokenHierarchy<?> expectedHi,
            TokenSequence<?> actual, TokenHierarchy<?> actualHi,
            boolean testLookaheadAndState) {
        assertTokenSequencesEqual(null, expected, expectedHi, actual, actualHi,
                testLookaheadAndState);
    }

    /**
     * Compare the contents of the given token sequences by moving through all their
     * tokens.
     * <br/>
     * Token hierarchies are given to check implementations of
     * Token.offset(TokenHierarchy) - useful for checking token snapshots.
     *
     * @param message message to display (may be null).
     * @param expected non-null token sequence to be compared to the other token sequence.
     * @param expectedHi token hierarchy to which expected relates.
     * @param actual non-null token sequence to be compared to the other token sequence.
     * @param actualHi token hierarchy to which actual relates.
     * @param testLookaheadAndState whether lookaheads and states should be checked.
     *  Generally it should be true, but for snapshot checking it must be false
     *  because snapshots do not hold lookaheads and states.
     */
    public static void assertTokenSequencesEqual(String message,
            TokenSequence<?> expected, TokenHierarchy<?> expectedHi,
            TokenSequence<?> actual, TokenHierarchy<?> actualHi,
            boolean testLookaheadAndState) {
        boolean success = false;
        try {
            String prefix = messagePrefix(message);
            TestCase.assertEquals(prefix + "Move previous: ",
                    expected.movePrevious(), actual.movePrevious());
            while (expected.moveNext()) {
                TestCase.assertTrue(prefix + "Move next: ", actual.moveNext());
                assertTokensEqual(message, expected, expectedHi, actual, actualHi,
                        testLookaheadAndState);
            }
            TestCase.assertFalse(prefix + "Move next not disabled", actual.moveNext());
            success = true;
        } finally {
            if (!success) {
                System.err.println("Expected token sequence dump:\n" + expected);
                System.err.println("Test token sequence dump:\n" + actual);
            }
        }
    }
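
    // Illustrative sketch: comparing a batch-lexed hierarchy of a document's text against
    // the document's incremental hierarchy (this is essentially what incCheck() below
    // automates). The variables doc and language are assumed to exist in the surrounding
    // test; BadLocationException handling is omitted.
    //
    //     TokenHierarchy<?> incHi = TokenHierarchy.get(doc);
    //     TokenHierarchy<?> batchHi = TokenHierarchy.create(doc.getText(0, doc.getLength()), language);
    //     LexerTestUtilities.assertTokenSequencesEqual(batchHi.tokenSequence(), batchHi,
    //             incHi.tokenSequence(), incHi, true);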

    private static void assertTokensEqual(String message,
            TokenSequence<?> ts, TokenHierarchy tokenHierarchy,
            TokenSequence<?> ts2, TokenHierarchy tokenHierarchy2,
            boolean testLookaheadAndState) {
        Token<?> t = ts.token();
        Token<?> t2 = ts2.token();

        message = messagePrefix(message);
        TestCase.assertEquals(message + "Invalid token id", t.id(), t2.id());
        assertTextEquals(message + "Invalid token text", t.text(), t2.text());

        assertTokenOffsetsEqual(message, t.offset(tokenHierarchy), t2.offset(tokenHierarchy2));
        TestCase.assertEquals(message + "Invalid tokenSequence offset", ts.offset(), ts2.offset());

        // Checking LOOKAHEAD and STATE matching in case they are filled in (during tests)
        if (testing && testLookaheadAndState) {
            TestCase.assertEquals(message + "Invalid token.lookahead()",
                    lookahead(ts), lookahead(ts2));
            TestCase.assertEquals(message + "Invalid token.state()", state(ts), state(ts2));
        }
        TestCase.assertEquals(message + "Invalid token length", t.length(), t2.length());
        TestCase.assertEquals(message + "Invalid token part", t.partType(), t2.partType());
    }

    /**
     * Compute number of flyweight tokens in the given token sequence.
     *
     * @param ts non-null token sequence.
     * @return number of flyweight tokens in the token sequence.
     */
    public static int flyweightTokenCount(TokenSequence<?> ts) {
        int flyTokenCount = 0;
        ts.moveIndex(0);
        while (ts.moveNext()) {
            if (ts.token().isFlyweight()) {
                flyTokenCount++;
            }
        }
        return flyTokenCount;
    }

    /**
     * Compute total number of characters represented by flyweight tokens
     * in the given token sequence.
     *
     * @param ts non-null token sequence.
     * @return number of characters contained in the flyweight tokens
     *  in the token sequence.
     */
    public static int flyweightTextLength(TokenSequence<?> ts) {
        int flyTokenTextLength = 0;
        ts.moveIndex(0);
        while (ts.moveNext()) {
            if (ts.token().isFlyweight()) {
                flyTokenTextLength += ts.token().text().length();
            }
        }
        return flyTokenTextLength;
    }

    /**
     * Compute the distribution of flyweight token lengths across the given token sequence.
     *
     * @param ts non-null token sequence.
     * @return non-null list where the element at index <code>i</code> is the number
     *  of flyweight tokens whose length equals <code>i</code>.
     */
    public static List<Integer> flyweightDistribution(TokenSequence<?> ts) {
        List<Integer> distribution = new ArrayList<Integer>();
        ts.moveIndex(0);
        while (ts.moveNext()) {
            if (ts.token().isFlyweight()) {
                int len = ts.token().text().length();
                while (distribution.size() <= len) {
                    distribution.add(0);
                }
                distribution.set(len, distribution.get(len) + 1);
            }
        }
        return distribution;
    }
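
    // Illustrative sketch: the flyweight statistics above can be used to check how well a
    // lexer shares flyweight tokens. SimpleTokenId is hypothetical, and whether the tokens
    // of the chosen input are actually flyweight depends on the particular lexer.
    //
    //     TokenSequence<?> ts = TokenHierarchy.create("a a a", SimpleTokenId.language()).tokenSequence();
    //     int flyCount = LexerTestUtilities.flyweightTokenCount(ts);
    //     int flyLength = LexerTestUtilities.flyweightTextLength(ts);
    //     List<Integer> dist = LexerTestUtilities.flyweightDistribution(ts);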

    public static boolean collectionsEqual(Collection<?> c1, Collection<?> c2) {
        return c1.containsAll(c2) && c2.containsAll(c1);
    }

    public static void assertCollectionsEqual(Collection expected, Collection actual) {
        assertCollectionsEqual(null, expected, actual);
    }

    public static void assertCollectionsEqual(String message, Collection expected,
            Collection actual) {
        if (!collectionsEqual(expected, actual)) {
            message = messagePrefix(message);
            for (Iterator it = expected.iterator(); it.hasNext();) {
                Object o = it.next();
                if (!actual.contains(o)) {
                    System.err.println(actual.toString());
                    TestCase.fail(message + " Object " + o + " not contained in tested collection");
                }
            }
            for (Iterator it = actual.iterator(); it.hasNext();) {
                Object o = it.next();
                if (!expected.contains(o)) {
                    System.err.println(actual.toString());
                    TestCase.fail(message + " Extra object " + o + " contained in tested collection");
                }
            }
            TestCase.fail("Collections not equal for unknown reason!");
        }
    }

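    /**
     * Check that the incremental token hierarchy of the given document matches a freshly
     * batch-lexed hierarchy of the document's current text (lookaheads and states are
     * compared as well when testing mode is enabled). The batch hierarchy is then stored
     * in a document property as the last token hierarchy. The <code>nested</code>
     * parameter is currently unused.
     */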
    public static void incCheck(Document doc, boolean nested) {
        TokenHierarchy<?> thInc = TokenHierarchy.get(doc);
        Language<?> language = (Language<?>) doc.getProperty(Language.class);
        String docText = null;
        try {
            docText = doc.getText(0, doc.getLength());
        } catch (BadLocationException e) {
            e.printStackTrace();
            TestCase.fail("BadLocationException occurred");
        }
        TokenHierarchy<?> thBatch = TokenHierarchy.create(docText, language);
        boolean success = false;
        TokenSequence<?> batchTS = thBatch.tokenSequence();
        try {
            // Compare lookaheads and states as well
            assertTokenSequencesEqual(batchTS, thBatch, thInc.tokenSequence(), thInc, true);
            success = true;
        } finally {
            if (!success) {
                // Go forward two tokens to provide extra token context
                batchTS.moveNext();
                batchTS.moveNext();
                System.err.println("BATCH token sequence dump:\n" + thBatch.tokenSequence());
                TokenHierarchy<?> lastHi = (TokenHierarchy<?>) doc.getProperty(LAST_TOKEN_HIERARCHY);
                if (lastHi != null) {
                    System.err.println("PREVIOUS batch token sequence dump:\n"
                            + lastHi.tokenSequence());
                }
            }
        }

        // Check the change since the last modification
        TokenHierarchy<?> lastHi = (TokenHierarchy<?>) doc.getProperty(LAST_TOKEN_HIERARCHY);
        if (lastHi != null) {
            // TODO comparison
        }
        doc.putProperty(LAST_TOKEN_HIERARCHY, thBatch); // new last batch token hierarchy
    }

    /**
     * Get the lookahead for the token at which the token sequence is positioned.
     * <br/>
     * The method uses reflection to obtain the tokenList field of the token sequence.
     */
    public static int lookahead(TokenSequence<?> ts) {
        return tokenList(ts).lookahead(ts.index());
    }

    /**
     * Get the state for the token at which the token sequence is positioned.
     * <br/>
     * The method uses reflection to obtain the tokenList field of the token sequence.
     */
    public static Object state(TokenSequence<?> ts) {
        return tokenList(ts).state(ts.index());
    }

    /**
     * Compare whether the two character sequences represent the same text.
     */
    public static boolean textEquals(CharSequence text1, CharSequence text2) {
        return TokenUtilities.equals(text1, text2);
    }

    public static void assertTextEquals(CharSequence expected, CharSequence actual) {
        assertTextEquals(null, expected, actual);
    }

    public static void assertTextEquals(String message, CharSequence expected,
            CharSequence actual) {
        if (!textEquals(expected, actual)) {
            TestCase.fail(messagePrefix(message) + " expected:\"" + expected
                    + "\" but was:\"" + actual + "\"");
        }
    }

    /**
     * Return the given text as String translating the special characters (and '\')
     * into escape sequences.
     *
     * @param text non-null text to be debugged.
     * @return non-null string containing the debug text.
     */
    public static String debugText(CharSequence text) {
        return TokenUtilities.debugText(text);
    }

    /**
     * Return the given text as String translating the special characters (and '\')
     * into escape sequences.
     *
     * @param text non-null text to be debugged.
     * @return non-null string containing the debug text or "<null>".
     */
    public static String debugTextOrNull(CharSequence text) {
        return (text != null) ? debugText(text) : "<null>";
    }

    public static void initLastDocumentEventListening(Document doc) {
        doc.addDocumentListener(new DocumentListener() {
            public void insertUpdate(DocumentEvent evt) {
                storeEvent(evt);
            }

            public void removeUpdate(DocumentEvent evt) {
                storeEvent(evt);
            }

            public void changedUpdate(DocumentEvent evt) {
                storeEvent(evt);
            }

            private void storeEvent(DocumentEvent evt) {
                evt.getDocument().putProperty(DocumentEvent.class, evt);
            }
        });
    }

    public static DocumentEvent getLastDocumentEvent(Document doc) {
        return (DocumentEvent) doc.getProperty(DocumentEvent.class);
    }

    public static void initLastTokenHierarchyEventListening(Document doc) {
        TokenHierarchy hi = TokenHierarchy.get(doc);
        hi.addTokenHierarchyListener(TestTokenChangeListener.INSTANCE);
    }

    public static TokenHierarchyEvent getLastTokenHierarchyEvent(Document doc) {
        return (TokenHierarchyEvent) doc.getProperty(TokenHierarchyEvent.class);
    }
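
    // Illustrative sketch: the listening utilities above let a test capture the most recent
    // document and token-hierarchy events for later inspection. The variable doc is assumed
    // to be a Document under test; doc.insertString() may throw BadLocationException.
    //
    //     LexerTestUtilities.initLastDocumentEventListening(doc);
    //     LexerTestUtilities.initLastTokenHierarchyEventListening(doc);
    //     doc.insertString(0, "text", null);
    //     DocumentEvent docEvt = LexerTestUtilities.getLastDocumentEvent(doc);
    //     TokenHierarchyEvent thEvt = LexerTestUtilities.getLastTokenHierarchyEvent(doc);
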
    /**
     * Get token list from the given token sequence for testing purposes.
     */
    public static <T extends TokenId> TokenList<T> tokenList(TokenSequence<T> ts) {
        try {
            if (tokenListField == null) {
                tokenListField = ts.getClass().getDeclaredField("tokenList");
                tokenListField.setAccessible(true);
            }
            @SuppressWarnings("unchecked")
            TokenList<T> tl = (TokenList<T>) tokenListField.get(ts);
            return tl;
        } catch (Exception e) {
            TestCase.fail(e.getMessage());
            return null; // never reached
        }
    }

    private static String messagePrefix(String message) {
        if (message != null) {
            message = message + ": ";
        } else {
            message = "";
        }
        return message;
    }

    /**
     * Set whether the lexer should run in testing mode, in which additional
     * correctness checks are performed.
     */
    public static void setTesting(boolean testing) {
        System.setProperty("netbeans.debug.lexer.test", testing ? "true" : "false");
    }

    /**
     * Check whether a token descriptions dump file (a file with the added suffix ".tokens.txt")
     * exists for the given input file and whether it has the same content
     * as the one obtained by lexing the input file.
     * <br/>
     * This allows testing whether the lexer under test still produces the same tokens.
     * <br/>
     * The method will only pass successfully if both the input file and the token descriptions
     * file exist and the token descriptions file contains the same information
     * as the generated descriptions.
     * <br/>
     * If the token descriptions file does not exist the method will create it.
     * <br/>
     * As the lexer's behavior at the EOF is important and should be well tested,
     * there is support for splitting the input file virtually into multiple inputs
     * by a virtual EOF - see <code>TokenDumpTokenId</code> for details.
     * <br/>
     * It is also possible to specify special characters
     * - see <code>TokenDumpTokenId</code> for details.
     *
     * @param test non-null test (used for calling test.getDataDir()).
     * @param relFilePath non-null file path relative to the test's data directory.
     *  <br/>
     *  For example, if "testfiles/testinput.mylang.txt" is passed, the method will
     *  search for <code>new File(test.getDataDir() + "testfiles/testinput.mylang.txt")</code>,
     *  read its content, lex it and create token descriptions. Then it will search for
     *  <code>new File(test.getDataDir() + "testfiles/testinput.mylang.txt.tokens.txt")</code>
     *  and compare the file content with the generated descriptions.
     */
    public static void checkTokenDump(NbTestCase test, String relFilePath,
            Language<?> language) throws Exception {
        TokenDumpCheck.checkTokenDump(test, relFilePath, language);
    }
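
    // Illustrative sketch (class, file and token-id names are hypothetical): a data-driven
    // lexer test typically delegates to checkTokenDump() from an NbTestCase subclass.
    //
    //     public class MyLangLexerTest extends NbTestCase {
    //         public MyLangLexerTest(String name) { super(name); }
    //
    //         public void testTokenDump() throws Exception {
    //             LexerTestUtilities.checkTokenDump(this, "testfiles/testinput.mylang.txt",
    //                     MyLangTokenId.language());
    //         }
    //     }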

    private static final class TestTokenChangeListener implements TokenHierarchyListener {

        static TestTokenChangeListener INSTANCE = new TestTokenChangeListener();

        public void tokenHierarchyChanged(TokenHierarchyEvent evt) {
            TokenHierarchy hi = evt.tokenHierarchy();
            Document d = (Document) hi.inputSource();
            d.putProperty(TokenHierarchyEvent.class, evt);
        }

    }
}