Microsoft

Phi-4

Small but mighty. Exceptional reasoning and coding in a compact 14B package.

General Info

  • Publisher: Microsoft
  • Release Date: December 2024
  • Parameters: 14B
  • Context Window: 16k tokens
  • License: MIT License
  • Key Capabilities: Reasoning, Math, Coding, Safety

Phi-4 continues Microsoft's "textbook-quality" data recipe, training heavily on curated and synthetic data. Despite its modest 14B size, it matches or beats much larger models on math and reasoning benchmarks, making it a standout for compute-efficient AI.

Hello World Guide

Run Phi-4 locally with Hugging Face Transformers. Phi-4 is an instruction-tuned chat model, so wrap prompts in its chat template rather than passing raw text.

Python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-4"

# Phi-4 uses an architecture supported natively by transformers,
# so trust_remote_code is not needed.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Phi-4 is instruction-tuned: format the prompt with its chat template.
messages = [
    {"role": "user", "content": "Write a Python function to check for prime numbers."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# max_new_tokens bounds the completion itself; max_length would count the prompt too.
outputs = model.generate(inputs, max_new_tokens=200)

print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
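
As a lighter alternative, here is a minimal sketch using the pipeline API (assuming a recent transformers release, where text-generation pipelines accept chat messages and apply the template automatically):

Python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="microsoft/phi-4",
    model_kwargs={"torch_dtype": "auto"},
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Write a Python function to check for prime numbers."}
]

# The pipeline applies the chat template and returns the full conversation;
# the last message is the model's reply.
result = pipe(messages, max_new_tokens=200)
print(result[0]["generated_text"][-1]["content"])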

Industry Usage

Edge Computing

Ideal for deployment on edge devices and local servers where compute resources are limited.
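
For example, a minimal sketch of loading Phi-4 in 4-bit with bitsandbytes for memory-constrained hardware (the quantization settings here are illustrative, not official guidance):

Python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization shrinks the ~28 GB of fp16 weights for a 14B model
# to well under 10 GB, trading some accuracy for footprint.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-4",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-4")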

Mobile Apps

Can be quantized to run efficiently on high-end smartphones for offline reasoning tasks.
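
One hedged sketch of that path: export the weights to GGUF with llama.cpp's conversion tools, then drive them from Python via llama-cpp-python, the same runtime many on-device apps embed. The phi-4-q4.gguf filename is a hypothetical local file, e.g. a community quantization.

Python
from llama_cpp import Llama

# Hypothetical 4-bit GGUF build of Phi-4; produce one with llama.cpp's
# convert/quantize tools or download a community quantization.
llm = Llama(model_path="phi-4-q4.gguf", n_ctx=4096)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Is 97 prime? Answer briefly."}],
    max_tokens=64,
)
print(response["choices"][0]["message"]["content"])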

Synthetic Data

Used to generate high-quality synthetic training data for other models due to its strong reasoning.
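
A minimal sketch of that workflow, reusing the model and tokenizer loaded in the Hello World section (the seed topics and prompt wording are illustrative):

Python
import json

# Illustrative seed topics; real pipelines draw these from a curated taxonomy.
topics = ["prime factorization", "binary search", "recursion"]

records = []
for topic in topics:
    messages = [{
        "role": "user",
        "content": f"Write one exam-style question about {topic}, then a step-by-step solution.",
    }]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=300)
    completion = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
    records.append({"topic": topic, "text": completion})

# JSONL is a common interchange format for downstream fine-tuning.
with open("synthetic_data.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")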