TextRazor Java Reference

TextRazor's API helps you rapidly build state-of-the-art language processing technology into your application.

Our main analysis endpoint offers a simple combined call that allows you to perform several different analyses on the same document, for example extracting both the entities mentioned in the text and relations between them. The API allows callers to specify a number of extractors, which control the range of TextRazor's language analysis features.

If you have any queries please contact us at support@textrazor.com and we will get back to you promptly. We'd also love to hear from you if you have any ideas for improving the API or documentation.

We offer official client SDKs for Python, Java, and PHP, and our REST API is easy to integrate from other languages.

Installation

The easiest way to install the TextRazor Java SDK is with Maven. To add the dependency to your project simply add the following to your pom.xml:


<dependency>
  <groupId>com.textrazor</groupId>
  <artifactId>textrazor</artifactId>
  <version>1.0.12</version>
</dependency>

Alternatively you can pick up the latest source or precompiled Jar from GitHub.

TextRazor for Java depends on the Jackson JSON library. Ensure that this is on your classpath when running your project (using Maven as above takes care of this for you).

import com.textrazor.TextRazor;
import com.textrazor.annotations.Entity;
import com.textrazor.annotations.AnalyzedText;

TextRazor client = new TextRazor(API_KEY);

client.addExtractor("words");
client.addExtractor("entities");

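// analyze throws an AnalysisException or NetworkException on failure - handle or
// declare them in the calling method (see the Errors section below).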
AnalyzedText response = client.analyze("LONDON - Barclays misled shareholders and the public RBS about one of the biggest investments in the bank's history, a BBC Panorama investigation has found.");

for (Entity entity : response.getResponse().getEntities()) {
    System.out.println("Matched Entity: " + entity.getEntityId());
}

Please see https://github.com/TextRazor/textrazor-java/blob/master/test/com/textrazor/TestTextRazor.java for a more detailed example of the TextRazor Java SDK.

Authentication

The TextRazor API identifies each of your requests by your unique API Key, which you can find in the console.

By default the Java SDK uses SSL connections to encrypt all communication with the TextRazor server.

Errors

The TextRazor Java SDK throws an AnalysisException whenever it is unable to process your request, and a NetworkException whenever it cannot connect to the TextRazor servers. Both carry a descriptive message explaining the problem.

We recommend that you design your application to gracefully retry sending failed requests several times before stopping and logging the error.
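
For example, a minimal retry helper might look like the following (a sketch: it assumes the exception classes live in the com.textrazor package, and the retry count and back-off values are purely illustrative):

import com.textrazor.AnalysisException;
import com.textrazor.NetworkException;
import com.textrazor.TextRazor;
import com.textrazor.annotations.AnalyzedText;

public class RetryExample {
    private static final int MAX_ATTEMPTS = 3;

    public static AnalyzedText analyzeWithRetry(TextRazor client, String text)
            throws AnalysisException, NetworkException, InterruptedException {
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                return client.analyze(text); // an AnalysisException propagates immediately
            } catch (NetworkException e) {
                // Transient connectivity problem: give up after the last attempt,
                // otherwise back off briefly and retry.
                if (attempt == MAX_ATTEMPTS) throw e;
                Thread.sleep(1000L * attempt);
            }
        }
        throw new IllegalStateException("unreachable");
    }
}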

Best Practices

TextRazor was designed to work out of the box with a wide range of different types of content. However there are several steps you can take to help improve the system further for your specific application:

  • Experiment with different confidence score thresholds. Where possible, TextRazor will return scores for each of its annotations representing the amount of confidence the engine has in the result. If you prefer to avoid false positives in your application you may want to ignore results below a certain threshold, as sketched in the example after this list. The best way to find an appropriate threshold is to run a sample set of your documents through the system and manually inspect the results.
  • TextRazor's algorithms use the whole context of your document to understand its contents and disambiguate its analysis. Overall accuracy of the engine may be improved if long documents with multiple different themes are split up.
  • On the other hand, if you have numerous small pieces of content that are likely related, you may get better results by concatenating them before calling the API. This may be the case where you are analyzing multiple Tweets from one user, for example, or if you are separately analyzing the headline and body of a news story.
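
Picking up on the first point, a minimal sketch of threshold filtering might look like this (it reuses the response object and Entity import from the quickstart example above; the 0.75 threshold is purely illustrative):

double threshold = 0.75; // illustrative value - tune it against a sample of your own documents

for (Entity entity : response.getResponse().getEntities()) {
    if (entity.getConfidenceScore() >= threshold) {
        System.out.println(entity.getEntityId() + " (confidence " + entity.getConfidenceScore() + ")");
    }
}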

Please do not hesitate to contact us for help with getting the most out of the system for your use case.

API Reference

com.textrazor.TextRazor(java.lang.String apiKey) Object

All TextRazor functionality is exposed in the TextRazor class. Integrating into your project is simple. Create a TextRazor instance with your API key and the extractors you are interested in, then call analyze for each of your documents.

This class is threadsafe once initialized with the request options. You should create a new instance for each request if you are likely to be changing the request options in a multithreaded environment.

Analysis

AnalyzedText analyze(java.lang.String text)

Calls the TextRazor API with the provided UTF-8 encoded text. Returns an AnalyzedText containing the analysis metadata on success. Throws an AnalysisException or NetworkException on failure.

AnalyzedText analyzeUrl(java.lang.String url)

Calls the TextRazor API with the provided URL.

TextRazor will first download the contents of this URL, and then process the resulting text.

TextRazor will only attempt to analyze text documents. Any invalid UTF-8 characters will be replaced with a space character and ignored. TextRazor limits the total download size to approximately 1M. Any larger documents will be truncated to that size, and a warning will be returned in the response.

By default, TextRazor will clean all HTML prior to processing. For more control of the cleanup process, see the setCleanupMode option.

Returns an AnalyzedText containing the analysis metadata on success. Throws an AnalysisException or NetworkException on failure.
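
For example (a sketch, reusing the configured client instance from the quickstart above):

AnalyzedText response = client.analyzeUrl("https://www.textrazor.com/");

System.out.println("Detected language: " + response.getResponse().getLanguage());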

Analysis Options

setExtractors(java.util.List<java.lang.String> extractors)

Sets a list of “Extractors”, which tells TextRazor which analysis functions to perform on your text. For optimal performance, only select the extractors that are explicitly required by your application.

Valid Options: entities, topics, words, phrases, dependency-trees, relations, entailments, senses, spelling
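
For example, to request only the analyses your application needs (a sketch, reusing the client instance from the quickstart above):

import java.util.Arrays;

// Only request what you actually use - fewer extractors means faster responses.
client.setExtractors(Arrays.asList("entities", "topics", "words"));
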
setRules(java.lang.String rules)
String containing Prolog logic. All rules matching an extractor name listed in the request will be evaluated and all matching param combinations linked in the response.
setCleanupHTML(boolean cleanupHTML)
Deprecated - Please see setCleanupMode(java.lang.String cleanupMode)
When True, input text is treated as raw HTML and will be cleaned of tags, comments, scripts, and boilerplate content. When this option is enabled, the cleaned_text property is returned with the text content, providing access to the raw filtered text. When enabled, position offsets returned in individual words apply to the cleaned text, not the provided HTML.
setCleanupMode(java.lang.String cleanupMode)

Controls the preprocessing cleanup mode that TextRazor will apply to your content before analysis. For all options other than "raw", any position offsets returned will apply to the final cleaned text, not the raw HTML. If the cleaned text is required, please see the setCleanupReturnCleaned option.

Valid Options:
raw
Content is analyzed "as-is", with no preprocessing.
stripTags
All tags are removed from the document prior to analysis. This removes all HTML and XML tags, but the content of headings and menus will remain. This is a good option for analyzing HTML pages that aren't long-form documents.
cleanHTML
Boilerplate HTML is removed prior to analysis, including tags, comments, and menus, leaving only the body of the article.
setCleanupReturnCleaned(boolean cleanupReturnCleaned)
When True, the TextRazor response will contain the cleaned_text property, the text it analyzed after preprocessing. To save bandwidth, only set this to True if you need it in your application. Defaults to False.
setCleanupReturnRaw(boolean cleanupReturnRaw)
When return_raw is True, the TextRazor response will contain the raw_text property, the original text TextRazor received or downloaded before cleaning. To save bandwidth, only set this to True if you need it in your application. Defaults to False.
setCleanupUseMetadata(boolean cleanupUseMetadata)

When use_metadata is True, TextRazor will use metadata extracted from your document to help in the disambiguation/extraction process. This includes HTML titles and metadata, and can significantly improve results for shorter documents without much other content.

This option has no effect when cleanup_mode is 'raw'. Defaults to True.
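
Putting the cleanup options together, a typical configuration for analyzing HTML articles might look like this (a sketch, reusing the client instance from above):

// Strip boilerplate HTML before analysis and return the cleaned text for inspection.
client.setCleanupMode("cleanHTML");
client.setCleanupReturnCleaned(true);
client.setCleanupUseMetadata(true);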

setDownloadUserAgent(java.lang.String downloadUserAgent)

Sets the User-Agent header to be used when downloading over HTTP. This should be a descriptive string identifying your application, or an end user's browser user agent if you are performing live requests from a given user.

Defaults to "TextRazor Downloader (https://www.textrazor.com)"

setLanguageOverride(java.lang.String languageOverride)
When set to an ISO-639-2 language code, forces TextRazor to analyze content in this language. If not set, TextRazor will use the automatically identified language.
setDoCompression(boolean doCompression)
When True, request gzipped responses from TextRazor. When expecting a large response this can significantly reduce bandwidth. Defaults to True.
setDoEncryption(boolean doEncryption)
When True, all communication with TextRazor will be sent over SSL. This should be set to True when handling sensitive or private information. Defaults to True.
setEntityDictionaries(List<String> entityDictionaryIds)

Sets a list of the custom entity dictionaries to match against your content. Each item should be a string ID corresponding to dictionaries you have previously configured through the DictionaryManager interface.

setDbpediaTypeFilters(java.util.List<java.lang.String> dbpediaTypeFilters)
List of DBPedia types. All returned entities must match at least one of these types. For more information on TextRazor's type filtering, see http://www.textrazor.com/types. To account for inconsistencies in DBPedia and Freebase type information we recommend you filter on multiple types across both sources where possible.
setFreebaseTypeFilters(java.util.List<java.lang.String> freebaseTypeFilters)
List of Freebase types. All returned entities must match at least one of these types. For more information on TextRazor's type filtering, see http://www.textrazor.com/types. To account for inconsistencies in DBPedia and Freebase type information we recommend you filter on multiple types across both sources where possible.
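
For example, to keep only entities that look like people or organizations (a sketch; the type names here are illustrative, see the types page above for the exact identifiers you need):

client.setDbpediaTypeFilters(Arrays.asList("Person", "Organisation"));
client.setFreebaseTypeFilters(Arrays.asList("/people/person", "/organization/organization"));
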
setAllowOverlap(boolean allowOverlap)
When True, entities in the response may overlap. When False, the "best" set of non-overlapping entities is returned. Defaults to True.
setClassifiers(List<String> classifierIds)

Sets a list of classifiers to evaluate against your document. Each entry should be a string ID corresponding to either one of TextRazor's default classifiers, or one you have previously configured through the ClassifierManager interface.

If you aren't tied to a particular taxonomy version, the current textrazor_mediatopics_2023Q1 is a sound starting point for many classification projects.

Valid Options:
textrazor_iab_content_taxonomy_3.0
IAB Content Taxonomy v3.0 - the latest (2022) update of the Internet Advertising Bureau Content Taxonomy.
textrazor_iab_content_taxonomy_2.2
textrazor_iab_content_taxonomy
IAB Content Taxonomy v2 is an updated version (2017) of the IAB QAG segments.
textrazor_iab
textrazor_mediatopics_2023Q1
IPTC Media Topics - Latest (March 2023) version of IPTC's 1100-term taxonomy with a focus on text.
textrazor_mediatopics
IPTC Media Topics - Original (2017) version of the IPTC Media Topic taxonomy.
textrazor_newscodes
custom classifier name
Custom classifier, previously created through the Classifier Manager interface.
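
For example, to classify a document against the IPTC Media Topics taxonomy and read back the matching categories (a sketch; it reuses the client from above and assumes ScoredCategory lives in com.textrazor.annotations alongside the other annotation classes):

import com.textrazor.annotations.ScoredCategory;

client.setClassifiers(Arrays.asList("textrazor_mediatopics_2023Q1"));

AnalyzedText classified = client.analyze(documentText); // documentText is your own content

for (ScoredCategory category : classified.getResponse().getCategories()) {
    // Ignore weak matches - 0.5 is a reasonable starting threshold.
    if (category.getScore() >= 0.5) {
        System.out.println(category.getLabel() + " " + category.getScore());
    }
}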

Response Object

getTime()
Total time in seconds TextRazor took to process this request. This does not include any time spent sending or receiving the request/response.
isOk()
True if TextRazor successfully analyzed your document, False if there was some error.
getError()
Descriptive error message of any problems that may have occurred during analysis, or an empty string if there was no error.
getMessage()
Any warning or informational messages returned from the server, or an empty string if there was no message.
getCustomAnnotationOutput()
Any output generated while running the embedded Prolog engine on your custom rules.
getEntailments()
List of all Entailment across all sentences in the response.
getEntities()
List of all the Entity across all sentences in the response.
getTopics()
List of all the Topic in the response.
getCategories()
List of all the ScoredCategory in the response.
getNounPhrases()
List of all the NounPhrase in the response.
getProperties()
List of all Property across all sentences in the response.
getRelations()
List of all Relation across all sentences in the response.
getSentences()
List of all Sentence in the response.
getLanguage()
The ISO-639-2 language used to analyze this document, either explicitly provided as the languageOverride, or as detected by the language detector.
getLanguageIsReliable()
Boolean indicating whether the language detector was confident of its classification. This may be false for shorter or ambiguous content.

All calls to analyze return an AnalyzedText object. This in turn contains a Response object with the response data.
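
A typical pattern is to check the response status before reading any annotations (a sketch; it assumes the Response class lives in com.textrazor.annotations):

import com.textrazor.annotations.Response;

Response razorResponse = analyzedText.getResponse(); // analyzedText returned by analyze

if (razorResponse.isOk()) {
    System.out.println("Processed in " + razorResponse.getTime() + "s, language: " + razorResponse.getLanguage());
} else {
    System.err.println("Analysis failed: " + razorResponse.getError());
}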

com.textrazor.annotations.Entity Object

getEntityId()
The disambiguated ID for this entity, or null if this entity could not be disambiguated. This ID is from the localized Wikipedia for this document's language.
getEntityEnglishId()
The disambiguated entityId in the English Wikipedia, where a link between the localized and English IDs could be found. Null if either the entity could not be linked, or no language link exists.
getCustomEntityId()
The custom entity DictionaryEntry ID that matched this Entity, if this entity was matched in a custom dictionary.
getConfidenceScore()
The confidence that TextRazor is correct that this is a valid entity. TextRazor uses an ever-increasing number of signals to help spot valid entities, all of which contribute to this score. For each entity we consider the semantic agreement between the context in the source text and our knowledgebase, compatibility with the other entities in the text, compatibility between the expected entity type and context, and prior probabilities of having seen this entity across Wikipedia and other web datasets. The score is technically unbounded, but typically ranges from 0.5 to 10, with 10 representing the highest confidence that this is a valid entity. Longer documents with more context will tend to have higher confidence scores than shorter tweets, so if you are choosing a confidence threshold it's a good idea to experiment with thresholds on your own data.
getDBPediaTypes()
List of Dbpedia types for this entity, or an empty list if there are none.
getFreebaseTypes()
List of Freebase types for this entity, or an empty list if there are none.
getFreebaseId()
The disambiguated Freebase ID for this entity, or null if either this entity could not be disambiguated, or a Freebase link doesn't exist.
getWikidataId()
The disambiguated Wikidata QID for this entity, or null if either this entity could not be disambiguated, or a Wikidata link doesn't exist.
getMatchingTokens()
List of the token positions in the current sentence that make up this entity.
getMatchingWords()
List of Word that make up this entity.
getMatchedText()
Source text string that matched this entity.
getData()
Dictionary containing enriched data found for this entity.
getRelevanceScore()
Relevance this entity has to the source text. This is a float on a scale of 0 to 1, with 1 being the most relevant. Relevance is determined by the contextual similarity between the entity's context and facts in the TextRazor knowledgebase.
getWikiLink()
Link to Wikipedia for this entity, or null if either this entity could not be disambiguated or a Wikipedia link doesn't exist.

Represents a single “Named Entity” extracted from text.

Each entity is disambiguated to Wikipedia and Freebase concepts wherever possible. Where the entity could not be linked, the relevant properties will return null.

Request the "entities" extractor for this object.

Scores

Entities are returned with both Confidence and Relevance scores when possible. These measure slightly different things. The confidence score is a measure of the engine's confidence that the entity is a valid entity given the document context, whereas the relevance score measures how on-topic or important that entity is to the document. As an example, a news story mentioning "Barack Obama" in passing would assign high confidence to the "Barack Obama" entity. If the story isn't about politics, however, the same entity might have a low relevance score.

Scores can vary if the same entity is mentioned more than once. As an entity is mentioned in different contexts the engine will report different scores.

com.textrazor.annotations.Topic Object

getLabel()
Label for this topic.
getScore()
The relevance of this topic to the processed document. This score ranges from 0 to 1, with 1 representing the highest relevance of the topic to the processed document.
getWikiLink()
Link to Wikipedia for this topic, or null if this topic couldn't be linked to a Wikipedia page.
getWikidataId()
The disambiguated Wikidata QID for this topic, or null if either this topic could not be disambiguated, or a Wikidata link doesn't exist.

Represents a single “Topic” extracted from text.

Request the "topics" extractor for this object.

ScoredCategory Object

getCategoryId()
The unique ID for this category within its classifier.
getLabel()
The human readable label for this category.
getScore()

The score TextRazor has assigned to this category, between 0 and 1.

To avoid false positives you might want to ignore categories below a certain score - a good starting point would be 0.5. The best way to find an appropriate threshold is to run a sample set of your documents through the system and manually inspect the results.

getClassifierId()
The unique identifier for the classifier that matched this category.

Represents a single “Category” that matches your document.

The classifier ID must be specified in the "classifiers" list with your analysis request.

com.textrazor.annotations.Entailment Object

getContextScore()
Score representing agreement between the source word’s usage in this sentence and the entailed word's usage in our knowledgebase.
getEntailedWords()
Word that is entailed by the source words.
getEntailedTree()
Tree containing the entailed word structure. Note - currently TextRazor only returns a single entailed word, this tree will only contain one leaf.
getWordPositions()
The token positions in the current sentence that generated this entailment.
getMatchedWords()
Links to the Word in the current sentence that generated this entailment.
getPriorScore()
The score of this entailment independent of the context it is used in this sentence.
getScore()
The overall confidence that TextRazor is correct that this is a valid entailment, a combination of the prior and context score.

Represents a single “entailment” derived from the source text.

Please note - If you need the source word for each Entailment you must request the "words" extractor.

Request the "entailments" extractor for this object.

com.textrazor.annotations.RelationParam Object

getWordPositions()
List of the positions of the words in this param within their sentence.
getParamWords()
List of all the Word that make up this param.
getRelation()
Relation of this param to the predicate.
Valid Options: SUBJECT, OBJECT, OTHER

Represents a Param to a specific Relation.

Request the "relations" extractor for this object.

com.textrazor.annotations.NounPhrase Object

getWordPositions()
List of the positions of the words in this phrase within their sentence.
getWords()
List of Word that make up this phrase.

Represents a multi-word phrase extracted from a sentence.

Request the "phrases" extractor for this object.

Word Links

To extract the full text of the noun phrase from the original content you must add the "words" extractor, and use the word offsets to recreate the original string.
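
A minimal sketch that simply joins the matched tokens with spaces (for exact reconstruction you could instead slice the analyzed text using getStartingPos() of the first word and getEndingPos() of the last):

import com.textrazor.annotations.Word;

StringBuilder phraseText = new StringBuilder();

for (Word word : phrase.getWords()) { // phrase is a NounPhrase from getNounPhrases()
    if (phraseText.length() > 0) phraseText.append(' ');
    phraseText.append(word.getToken());
}

System.out.println("Noun phrase: " + phraseText);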

com.textrazor.annotations.Property Object

getWordPositions()
List of the positions of the words in the predicate (or focus) of this property.
getPredicateWords()
List of TextRazor words that make up the predicate (or focus) of this property.
getPropertyPositions()
List of the positions of the words that make up the property that targets the focus words.
getPropertyWords()
List of Word that make up the property that targets the focus words.

Represents a property relation extracted from raw text. A property implies an “is-a” or “has-a” relationship between the predicate (or focus) and its property.

Request the "relations" extractor for this object.

com.textrazor.annotations.Relation Object

getParams()
List of the TextRazor RelationParam of this relation.
getWordPositions()
List of the positions of the predicate words in this relation within their sentence.
getPredicateWords()
List of the TextRazor Word in this relation.

Represents a grammatical relation between words. Typically owns a number of RelationParam, representing the SUBJECT and OBJECT of the relation.

Request the "relations" extractor for this object.

Word Links

To extract the full text of the relation predicate or param from the original content you must add the "words" extractor, and use the word offsets to recreate the original string.

com.textrazor.annotations.Word Object

getChildren()
List of TextRazor Word that make up the children of this word. Returns an empty list for leaf words, or if the “dependency-trees” extractor was not requested.
getEntailments()
List of Entailment that this word entails.
getEntities()
List of Entity that this word is a part of.
getEndingPos()
End offset in the input text for this token. Note that this offset applies to the original Unicode string passed to the API; TextRazor treats multi-byte UTF-8 characters as a single position.
getStartingPos()
Start offset in the input text for this token. Note that this offset applies to the original Unicode string passed to the API; TextRazor treats multi-byte UTF-8 characters as a single position.
getLemma()
Morphological root of this word, see http://en.wikipedia.org/wiki/Lemma_(morphology) for details.
getNounPhrases()
List of NounPhrase that this word is a member of.
getParentWord()
Link to the TextRazor Word that is the parent of this word, or null if this word is either at the root of the sentence or the “dependency-trees” extractor was not requested.
getParentPosition()
Position of the grammatical parent of this word, or null if this word is either at the root of the sentence or the “dependency-trees” extractor was not requested.
getPartOfSpeech()
Part of Speech that applies to this word. English documents use the Penn Treebank tagset, as detailed here: https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html All other languages that support POS tagging use the Universal Dependencies POS tagset, detailed here: http://universaldependencies.org/u/pos/
getSenses()
List of {'sense', 'score'} maps representing scores of each WordNet sense this word may be a part of. This property requires the "senses" extractor to be sent with your analysis request.
getSpellingSuggestions()
List of {'suggestion', 'score'} maps representing scores of each spelling suggestion that might replace this word. This property requires the "spelling" extractor to be sent with your analysis request.
getPosition()
Position of this word in its sentence.
getPropertyPredicates()
List of Property that this word is a predicate (or focus) member of.
getRelationParams()
List of RelationParam that this word is a member of.
getRelationToParent()
Grammatical relation between this word and its parent, or null if this word is either at the root of the sentence or the “dependency-trees” extractor was not requested. TextRazor parses into the Stanford uncollapsed dependencies, as detailed at: http://nlp.stanford.edu/software/dependencies_manual.pdf
getRelations()
List of Relation that this word is a predicate of.
getStem()
Stem of this word.
getToken()
Raw token string that matched this word in the source text.

Represents a single Word (token) extracted by TextRazor.

Request the "words" extractor for this object.
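
For example, with the "words" and "dependency-trees" extractors requested, you could print each token with its lemma, part of speech and grammatical link to its parent (a sketch, reusing the response object from the quickstart):

import com.textrazor.annotations.Sentence;
import com.textrazor.annotations.Word;

for (Sentence sentence : response.getResponse().getSentences()) {
    for (Word word : sentence.getWords()) {
        System.out.println(word.getToken()
                + "\tlemma=" + word.getLemma()
                + "\tpos=" + word.getPartOfSpeech()
                + "\trel=" + word.getRelationToParent()
                + "\tparent=" + word.getParentPosition());
    }
}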

For convenience, the Java SDK automatically adds helper functions for retrieving the annotations extracted from each sentence.

com.textrazor.annotations.Sentence Object

getWords()
List of all the Word in this sentence.

Represents a single sentence extracted by TextRazor.

com.textrazor.dictionary.DictionaryManager(java.lang.String apiKey) Object

TextRazor Entity Dictionaries allow you to augment the TextRazor entity extraction system with custom entities that are relevant to your application.

Entity Dictionaries are useful for identifying domain specific entities that may not be common enough for TextRazor to know about out of the box - examples might be Product names, Drug names, and specific person names.

TextRazor supports flexible, high performance matching of dictionaries up to several million entries, limited only by your account plan. Entries are automatically indexed and distributed across our analysis infrastructure to ensure they scale seamlessly with your application.

Once you have created a dictionary, add its ID to your analysis requests with setEntityDictionaries. TextRazor will look for any DictionaryEntry in the dictionary that can be matched to your document, and return it as part of the standard Entity response.
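
For example, after creating a dictionary with the ID "developers" (as in the examples below), you could match it during analysis like this (a sketch, reusing the client from the quickstart):

client.setEntityDictionaries(Arrays.asList("developers"));

AnalyzedText result = client.analyze(documentText); // documentText is your own content

for (Entity entity : result.getResponse().getEntities()) {
    String customId = entity.getCustomEntityId();
    if (customId != null && !customId.isEmpty()) {
        System.out.println("Custom match: " + customId + " -> " + entity.getMatchedText());
    }
}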

Methods

createDictionary(Dictionary dictionary)

Creates a new dictionary.

See the properties of class Dictionary for valid options.

import com.textrazor.dictionary.DictionaryManager;

DictionaryManager manager = new DictionaryManager(apiKey);

manager.createDictionary(Dictionary.builder().setId("developers").setMatchType("token").build());
allDictionaries()

Returns a list of all Dictionary in your account.

for (Dictionary dict : manager.allDictionaries()) {
    System.out.println("Current dictionary: " + dict.getId());
}
getDictionary(String id)

Returns a Dictionary object by id.

Dictionary dict = manager.getDictionary("developers");
deleteDictionary(String id)

Deletes a dictionary and all its entries by id.

manager.deleteDictionary("developers");
allEntries(String id, int limit, int offset)

Returns an AllDictionaryEntriesResponse containing all DictionaryEntry for a dictionary, along with paging information.

Larger dictionaries can be too big to download all at once. Where possible it is recommended that you use the limit and offset parameters to control the TextRazor response, rather than filtering client side.

PagedAllEntries allEntries = manager.allEntries("developers");
addEntries(String id, List<DictionaryEntry> entries)

Adds entries to a dictionary.

Entries must be a list of the new DictionaryEntry objects. At a minimum each entry needs its text property set, for example {'text':'test text to match'}.

List<DictionaryEntry> newEntries = new ArrayList<>();
List<String> types = Arrays.asList("cpp_developer", "writer");

newEntries.add(DictionaryEntry.builder().setText("Bjarne Stroustrup").setId("DEV2").addData("types", types).build());

manager.addEntries("developers", newEntries);
getEntry(String dictionaryId, String entryId)

Retrieves a specific DictionaryEntry by dictionary id and entry id.

DictionaryEntry entry = manager.getEntry("developers", "DEV2");
deleteEntry(String dictionaryId, String entryId)

Deletes a specific DictionaryEntry by dictionary id and entry id.

For performance reasons it's always faster to perform major changes to dictionaries by deleting and recreating the whole dictionary rather than removing many individual entries.

manager.deleteEntry("developers", "DEV2");

Limits

Users on any of our paid plans can create up to 10 dictionaries, with a total of 10,000 entries. TextRazor supports custom dictionaries of millions of entries, please contact us to discuss increasing this limit for your account.

Free account holders are able to create 1 Dictionary with a total of 50 Entries.

com.textrazor.dictionary.model.Dictionary Object

getMatchType()

Controls any pre-processing done on your dictionary before matching.

Defaults to 'token'.

Valid Options:
stem
Words are split and "stemmed" before matching, resulting in a more relaxed match. This is an easy way to match plurals - love, loved, loves will all match the same dictionary entry. This implicitly sets "case_insensitive" to True.
token
Words are split and matched literally.
getCaseInsensitive()

When True, this dictionary will match both uppercase and lowercase characters.

Defaults to 'False'

getId()

The unique identifier for this dictionary.

getLanguage()

When set to an ISO-639-2 language code, this dictionary will only match documents of the corresponding language.

When set to 'any', this dictionary will match any document.

Defaults to 'any'

Represents a single Dictionary, uniquely identified by an id. Each Dictionary owns a set of DictionaryEntry.

Dictionary and DictionaryEntry can only be manipulated through the DictionaryManager object.
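
A sketch of creating a stem-matched, language-restricted dictionary (this assumes the Dictionary builder exposes setters mirroring the getters above, such as setCaseInsensitive and setLanguage, alongside the setId and setMatchType calls shown earlier):

Dictionary productDictionary = Dictionary.builder()
        .setId("product_names")   // hypothetical dictionary ID, for illustration only
        .setMatchType("stem")     // relaxed matching: love, loved and loves all match
        .setCaseInsensitive(true) // assumed builder setter mirroring getCaseInsensitive()
        .setLanguage("eng")       // assumed builder setter; only match English documents
        .build();

manager.createDictionary(productDictionary);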

com.textrazor.dictionary.model.DictionaryEntry Object

Represents a single dictionary entry, belonging to a Dictionary object.

getId()

Unique ID for this entry, used to identify and manipulate specific entries.

Defaults to an automatically generated unique id.

getText()

String representing the text to match to this DictionaryEntry.

getData()

A dictionary mapping string keys to lists of string data values. Where TextRazor matches this entry to your content in analysis, it will return the dictionary as part of the entity response.

This is useful for adding application-specific metadata to each entry. Dictionary data is limited to a maximum of 10 keys, and a total of 1000 characters across all the mapped values.

{'type':['people', 'person', 'politician']}

com.textrazor.classifier.ClassifierManager(java.lang.String apiKey) Object

TextRazor can classify your documents according to the IPTC Media Topics, IPTC Newscode or IAB QAG taxonomies using our predefined models.

Sometimes the categories you might be interested in aren't well represented by off-the-shelf classifiers. TextRazor gives you the flexibility to create a customized model for your particular project.

TextRazor uses "concept queries" to define new categories. These are similar to the sort of boolean query that you might type into a search engine, except they query the semantic meaning of the document you are analyzing. Each concept query uses a word or two in English to define your category.

For an example of how to create a custom classifier please see our tutorials. If you aren't getting the results you need, please contact us, we'd be happy to help.

The ClassifierManager class offers a simple interface for creating and managing your classifiers. Classifiers only need to be uploaded once; they are safely stored on our servers for use with future analyze requests. Simply add the classifier name to your request's "classifiers" list.

Methods

createClassifier(String classifierId, List<Category> categories)

Creates a new classifier using the provided list of Category.

See the properties of class Category for valid options.

import com.textrazor.classifier.ClassifierManager;

ClassifierManager manager = new ClassifierManager(apiKey);
String testClassifierId = "my_test_classifier";

manager.createClassifier(testClassifierId, Arrays.asList(Category.builder().setCategoryId("Soccer").setQuery("or(concept('soccer'),concept('association football'))").build()));
deleteClassifier(String classifierId)

Deletes a Classifier and all its Categories by id.

manager.deleteClassifier(testClassifierId);
allCategories(String classifierId, int limit, int offset)

Returns an AllCategoriesResponse containing all Category for a classifier, along with paging information.

Larger classifiers can be too big to download all at once. Where possible it is recommended that you use the limit and offset parameters to control the TextRazor response, rather than filtering client side.

for (Category cat : manager.allCategories(testClassifierId).getCategories()) {
    System.out.println(cat.getCategoryId() + " " + cat.getQuery() + " " + cat.getLabel());
}
deleteCategory(String classifierId, String categoryId)

Deletes a Category object by id.

For performance reasons it's always better to delete and recreate a whole classifier rather than its individual categories one at a time.

manager.deleteCategory(testClassifierId, "Soccer");
getCategory(String classifierId, String categoryId)

Returns a Category object by id.

System.out.println(manager.getCategory(testClassifierId, "Soccer").getLabel());

Limits

Users on any of our paid plans can create up to 10 Classifiers, with a total of 1000 categories. Please contact us to discuss increasing this limit for your account.

Free account holders are able to create 1 Classifier with a total of 50 Categories.

There are no restrictions on the use of classifiers that have been pre-defined by TextRazor.

com.textrazor.classifier.model.Category Object

getCategoryId()
The unique ID for this category within its classifier.
getLabel()
The human readable label for this category. This is an optional field.
getQuery()
The query used to define this category.

Represents a single Category that belongs to a Classifier. Each category consists of a unique ID, and a query at a minimum.

{
    "categoryId" : "100",
    "label" : "Golf",
    "query" : "concept('sport>golf')"
}

com.textrazor.account.AccountManager(java.lang.String apiKey) Object

Allows you to retrieve data about your TextRazor account, designed to help manage and control your usage.

Methods

getAccount()

Returns a complete Account object.

The account endpoint is read only. Calls to this endpoint do not count towards your daily quota.
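
For example, to check how much of today's quota you have used (a sketch):

import com.textrazor.account.AccountManager;
import com.textrazor.account.model.Account;

AccountManager accountManager = new AccountManager(API_KEY);
Account account = accountManager.getAccount();

System.out.println("Plan: " + account.getPlan());
System.out.println("Requests used today: " + account.getRequestsUsedToday()
        + " of " + account.getPlanDailyIncludedRequests());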

com.textrazor.account.model.Account Object

getPlan()
The ID of your current subscription plan.
getConcurrentRequestLimit()
The maximum number of requests your account can make simultaneously in parallel.
getConcurrentRequestsUsed()
The number of requests currently being processed by your account in parallel.
getPlanDailyIncludedRequests()
The daily number of requests included with your subscription plan.
getRequestsUsedToday()
The total number of requests that have been made today.