The key distinction between (sequence) classification and token classification lies in their output granularity: (sequence) classification produces a single result for an entire input sequence, whereas token classification yields a result for each individual token within the sequence.
Many classification models support both (sequence) classification and token classification. For further details on (sequence) classification, please refer to [this page](classify.md).
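The difference in granularity can be sketched with tensor shapes (a minimal illustration only; the shapes and variable names here are assumptions, not the vLLM API):

```python
import torch

num_tokens, num_labels = 6, 4

# Token classification: one row of label scores per input token.
token_cls_output = torch.randn(num_tokens, num_labels)

# (Sequence) classification: a single row of label scores for the whole
# input, e.g. derived from the last token's scores.
seq_cls_output = token_cls_output[-1]

assert token_cls_output.shape == (num_tokens, num_labels)
assert seq_cls_output.shape == (num_labels,)
```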
<sup>C</sup> Automatically converted into a classification model via `--convert classify`. ([details](./README.md#model-conversion))
\* Feature support is the same as that of the original model.
If your model is not in the above list, we will try to automatically convert the model using
[as_seq_cls_model][vllm.model_executor.models.adapters.as_seq_cls_model]. By default, the class probabilities are extracted from the softmaxed hidden state corresponding to the last token.
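That default can be sketched as follows (a minimal illustration of the idea, not the actual adapter code; the shapes and the classifier head are assumptions):

```python
import torch

hidden_size, num_labels, seq_len = 16, 3, 7

# Hidden states for one sequence, one row per token (shapes are illustrative).
hidden_states = torch.randn(seq_len, hidden_size)

# Hypothetical classification head mapping a hidden state to per-class logits.
score_head = torch.nn.Linear(hidden_size, num_labels)

# Take the hidden state of the last token, project to logits, then softmax
# to obtain class probabilities.
last_hidden = hidden_states[-1]
probs = torch.softmax(score_head(last_hidden), dim=-1)

assert probs.shape == (num_labels,)
```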
### As Reward Models
Token classification models can be used as reward models. For details on reward models, see [Reward Models](reward.md).
```python
from vllm import LLM

# The model name here is illustrative; substitute any reward model from the
# supported list. `runner="pooling"` selects the pooling runner.
llm = LLM(model="internlm/internlm2-1_8b-reward", runner="pooling", trust_remote_code=True)

(output,) = llm.encode("Hello, my name is", pooling_task="token_classify")
data = output.outputs.data
print(f"Data: {data!r}")
```
## Online Serving
Please refer to the [pooling API](README.md#pooling-api) and use `"task":"token_classify"`.
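For example, a request body for the pooling endpoint might look like this (a sketch under the assumption that a server started with `vllm serve` is listening on `localhost:8000`):

```python
import json

# Hypothetical request body for the pooling API; the "task" field selects
# token classification.
payload = {
    "input": "Hello, my name is",
    "task": "token_classify",
}
body = json.dumps(payload)
# POST `body` to http://localhost:8000/pooling with
# Content-Type: application/json, e.g. via curl or the `requests` library.
```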
## More examples
More examples can be found here: [examples/pooling/token_classify](../../../examples/pooling/token_classify)
## Supported Features
Token classification features should be consistent with (sequence) classification. For more information, see [this page](classify.md#supported-features).