🚀 MyBatis Advanced Features and Performance Optimization: A Practical Guide from Beginner to Expert
As a seasoned backend engineer, I know how central MyBatis is to enterprise applications. In this article I'll take you through MyBatis's advanced features and share performance-tuning techniques drawn from real projects. Ready? Let's begin.
Table of Contents
- 🚀 MyBatis Advanced Features and Performance Optimization: A Practical Guide from Beginner to Expert
- ⚡ Deep Dive into Caching: Make Your Application Fly
- First-Level Cache (SqlSession Scope): A Performance Tool Enabled by Default
- How the First-Level Cache Works
- When the First-Level Cache Is Invalidated
- Second-Level Cache (Mapper Scope): Sharing Data Across SqlSessions
- Second-Level Cache Configuration Parameters
- Second-Level Cache Caveats
- Custom Cache Implementations: Building Your Own Caching Strategy
- A Custom Cache Based on the LRU Algorithm
- Cache Eviction Strategies: Managing the Cache Lifecycle Intelligently
- How Eviction Strategies Are Implemented
- Redis-Backed Caching: Best Practices for Distributed Caches
- Tuning the Redis Cache Configuration
- Cache Warm-Up Strategy
- 🎯 Performance Optimization in Practice: Double Your Application's Performance
- SQL Execution Analysis and Tuning: Finding the Bottlenecks
- Custom SQL Execution-Time Monitoring
- Integrating SQL Performance Analysis Tools
- Batch Operations: Say Goodbye to Inefficient Row-by-Row SQL
- Benchmarking Batch Operations
- Pagination Best Practices: Handling Large Data Sets Efficiently
- Pagination Performance Strategies
- Connection Pool Tuning: Sizing Database Connections Sensibly
- Connection Pool Monitoring and Tuning
- Tuning Lazy Loading: Fetching Associations on Demand
- Lazy-Loading Performance Tips
- Conditional Lazy Loading
- 🔧 Advanced Performance Optimization Techniques
- Result Set Processing
- Dynamic SQL Optimization
- Type Handler Optimization
- 🔧 Summary: The Golden Rules of Performance Optimization
- 1. The Cache Strategy Pyramid
- 💡 Final Thoughts
⚡ Deep Dive into Caching: Make Your Application Fly
First-Level Cache (SqlSession Scope): A Performance Tool Enabled by Default
The first-level cache is enabled by default in MyBatis and lives for the duration of a SqlSession. When you execute the same query twice within the same SqlSession, MyBatis returns the result straight from the cache instead of hitting the database again.
// First-level cache example
SqlSession sqlSession = sqlSessionFactory.openSession();
UserMapper userMapper = sqlSession.getMapper(UserMapper.class);

// First query hits the database
User user1 = userMapper.selectById(1);

// Second identical query is served from the first-level cache
User user2 = userMapper.selectById(1);

// user1 == user2 is true: the same object instance is returned
sqlSession.close();
💡 Pro Tips:
- The first-level cache is cleared automatically after any insert, update, or delete
- You can clear it manually with `sqlSession.clearCache()`
- In a distributed environment, pay special attention to data-consistency issues with the first-level cache
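For the distributed-consistency concern above, MyBatis ships a built-in mitigation: shrinking the first-level cache to statement scope via the `localCacheScope` setting, so a result is never reused across queries even within one SqlSession. A minimal config sketch:

```xml
<!-- mybatis-config.xml: STATEMENT scope effectively disables
     cross-query reuse of the first-level cache -->
<settings>
    <setting name="localCacheScope" value="STATEMENT"/>
</settings>
```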
How the First-Level Cache Works
The first-level cache is backed by a HashMap and stored inside BaseExecutor:
// First-level cache implementation in the MyBatis source
public abstract class BaseExecutor implements Executor {

    protected PerpetualCache localCache;

    @Override
    public <E> List<E> query(MappedStatement ms, Object parameter, RowBounds rowBounds,
                             ResultHandler resultHandler) throws SQLException {
        BoundSql boundSql = ms.getBoundSql(parameter);
        CacheKey key = createCacheKey(ms, parameter, rowBounds, boundSql);
        return query(ms, parameter, rowBounds, resultHandler, key, boundSql);
    }
}
When the First-Level Cache Is Invalidated

@Test
public void testFirstLevelCacheInvalidation() {
    SqlSession sqlSession = sqlSessionFactory.openSession();
    UserMapper userMapper = sqlSession.getMapper(UserMapper.class);

    // First query
    User user1 = userMapper.selectById(1);

    // Any update clears the first-level cache
    userMapper.updateUser(new User(2, "Zhang San", "zhangsan@example.com"));

    // The next query goes back to the database
    User user2 = userMapper.selectById(1);

    // user1 != user2 because the cache was cleared
    assertNotSame(user1, user2);
}
Second-Level Cache (Mapper Scope): Sharing Data Across SqlSessions
The second-level cache operates at the Mapper level and can share data across multiple SqlSessions. Enabling it requires the following configuration:
<!-- Enable the second-level cache in mybatis-config.xml -->
<settings>
    <setting name="cacheEnabled" value="true"/>
</settings>

<!-- Configure the cache in Mapper.xml -->
<cache eviction="LRU" flushInterval="60000" size="512" readOnly="true"/>

// Entities must implement Serializable
public class User implements Serializable {
    private static final long serialVersionUID = 1L;
    // ... fields and methods
}
Second-Level Cache Configuration Parameters

<!-- Detailed second-level cache configuration:
     eviction:      eviction strategy (LRU, FIFO, SOFT, WEAK)
     flushInterval: flush interval in milliseconds
     size:          maximum number of cached objects
     readOnly:      whether cached objects are read-only
     blocking:      whether reads block until the entry is populated
     type:          custom cache implementation class -->
<cache eviction="LRU"
       flushInterval="60000"
       size="512"
       readOnly="false"
       blocking="true"
       type="org.mybatis.caches.ehcache.EhcacheCache"/>
Second-Level Cache Caveats

// Correct use of the second-level cache
@Test
public void testSecondLevelCache() {
    // First SqlSession
    SqlSession sqlSession1 = sqlSessionFactory.openSession();
    UserMapper userMapper1 = sqlSession1.getMapper(UserMapper.class);
    User user1 = userMapper1.selectById(1);
    sqlSession1.commit(); // data only enters the second-level cache on commit
    sqlSession1.close();

    // Second SqlSession
    SqlSession sqlSession2 = sqlSessionFactory.openSession();
    UserMapper userMapper2 = sqlSession2.getMapper(UserMapper.class);
    User user2 = userMapper2.selectById(1); // served from the second-level cache
    sqlSession2.close();

    // Same data, different object instances (when readOnly=false)
    assertEquals(user1.getName(), user2.getName());
}
Custom Cache Implementations: Building Your Own Caching Strategy
When the built-in caches don't meet your needs, you can implement a custom cache:

public class CustomCache implements Cache {

    private final String id;
    private final Map<Object, Object> cache = new ConcurrentHashMap<>();
    private final ReadWriteLock readWriteLock = new ReentrantReadWriteLock();

    public CustomCache(String id) {
        this.id = id;
    }

    @Override
    public String getId() {
        return id;
    }

    @Override
    public void putObject(Object key, Object value) {
        readWriteLock.writeLock().lock();
        try {
            cache.put(key, value);
            // Log the write for debugging
            log.debug("Cache PUT: key={}, value={}", key, value);
        } finally {
            readWriteLock.writeLock().unlock();
        }
    }

    @Override
    public Object getObject(Object key) {
        readWriteLock.readLock().lock();
        try {
            Object value = cache.get(key);
            log.debug("Cache GET: key={}, hit={}", key, value != null);
            return value;
        } finally {
            readWriteLock.readLock().unlock();
        }
    }

    @Override
    public Object removeObject(Object key) {
        readWriteLock.writeLock().lock();
        try {
            return cache.remove(key);
        } finally {
            readWriteLock.writeLock().unlock();
        }
    }

    @Override
    public void clear() {
        readWriteLock.writeLock().lock();
        try {
            cache.clear();
            log.info("Cache cleared for: {}", id);
        } finally {
            readWriteLock.writeLock().unlock();
        }
    }

    @Override
    public int getSize() {
        return cache.size();
    }

    @Override
    public ReadWriteLock getReadWriteLock() {
        return readWriteLock;
    }
}
A Custom Cache Based on the LRU Algorithm

public class LRUCache implements Cache {

    private final String id;
    private final LinkedHashMap<Object, Object> cache;
    private final int maxSize;

    public LRUCache(String id) {
        this.id = id;
        this.maxSize = 1000; // default capacity
        // accessOrder=true makes iteration order follow access recency
        this.cache = new LinkedHashMap<Object, Object>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Object, Object> eldest) {
                return size() > maxSize;
            }
        };
    }

    // ... remaining Cache interface methods
}
Cache Eviction Strategies: Managing the Cache Lifecycle Intelligently
MyBatis offers several eviction strategies:
- LRU (Least Recently Used): evicts the objects that have gone unused the longest
- FIFO (First In, First Out): evicts objects in the order they entered the cache
- SOFT: evicts objects based on garbage-collector state and soft-reference rules
- WEAK: evicts objects based on garbage-collector state and weak-reference rules
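The LRU strategy is easy to demonstrate in isolation with the JDK's access-order `LinkedHashMap`, the same mechanism the `LRUCache` above relies on; a self-contained sketch:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {

    // Returns a map that evicts its least-recently-used entry once it holds
    // more than maxSize entries (accessOrder=true reorders entries on get)
    static <K, V> Map<K, V> lruMap(int maxSize) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, Integer> cache = lruMap(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // "a" becomes the most recently used entry
        cache.put("c", 3); // evicts "b", the least recently used entry
        System.out.println(cache.keySet()); // [a, c]
    }
}
```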
How Eviction Strategies Are Implemented

// Sketch of an LRU eviction policy
public class LruEvictionPolicy implements EvictionPolicy {

    private final Map<Object, Object> keyMap;
    private final LinkedList<Object> keyList;

    @Override
    public void recordAccess(Object key) {
        // Move the accessed key to the head of the list
        keyList.remove(key);
        keyList.addFirst(key);
    }

    @Override
    public Object pollLastEntry() {
        // Remove the tail element (the least recently used)
        return keyList.removeLast();
    }
}
Redis-Backed Caching: Best Practices for Distributed Caches
In a distributed environment, Redis is the usual choice for implementing the second-level cache:

public class RedisCache implements Cache {

    private final RedisTemplate<String, Object> redisTemplate;
    private final String id;
    private final ReadWriteLock readWriteLock = new ReentrantReadWriteLock();

    // MyBatis instantiates this class via <cache type="..."/> and passes the
    // mapper namespace as the id, so the RedisTemplate must be fetched from
    // the Spring context rather than injected
    public RedisCache(String id) {
        this.id = id;
        this.redisTemplate = SpringContextHolder.getBean(RedisTemplate.class);
    }

    @Override
    public String getId() {
        return id;
    }

    @Override
    public void putObject(Object key, Object value) {
        try {
            String redisKey = getKey(key);
            // A TTL guards against unbounded memory growth
            redisTemplate.opsForValue().set(redisKey, value, 30, TimeUnit.MINUTES);
            log.debug("Redis cache put: {}", redisKey);
        } catch (Exception e) {
            log.error("Redis cache put error", e);
        }
    }

    @Override
    public Object getObject(Object key) {
        try {
            String redisKey = getKey(key);
            Object value = redisTemplate.opsForValue().get(redisKey);
            log.debug("Redis cache get: {}, hit: {}", redisKey, value != null);
            return value;
        } catch (Exception e) {
            log.error("Redis cache get error", e);
            return null;
        }
    }

    @Override
    public Object removeObject(Object key) {
        try {
            String redisKey = getKey(key);
            Object value = redisTemplate.opsForValue().get(redisKey);
            redisTemplate.delete(redisKey);
            return value;
        } catch (Exception e) {
            log.error("Redis cache remove error", e);
            return null;
        }
    }

    @Override
    public void clear() {
        try {
            Set<String> keys = redisTemplate.keys(id + ":*");
            if (keys != null && !keys.isEmpty()) {
                redisTemplate.delete(keys);
            }
            log.info("Redis cache cleared for: {}", id);
        } catch (Exception e) {
            log.error("Redis cache clear error", e);
        }
    }

    @Override
    public int getSize() {
        return 0; // size is not tracked for the Redis-backed cache
    }

    private String getKey(Object key) {
        return id + ":" + DigestUtils.md5Hex(key.toString());
    }

    @Override
    public ReadWriteLock getReadWriteLock() {
        return readWriteLock;
    }
}
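`DigestUtils.md5Hex` in `getKey` comes from Apache commons-codec; if you'd rather avoid that dependency, an equivalent helper using only the JDK (a sketch, not part of the original code) looks like this:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class CacheKeyUtil {

    // JDK-only replacement for commons-codec's DigestUtils.md5Hex
    static String md5Hex(String input) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
            // %032x left-pads with zeros so leading zero bytes are preserved
            return String.format("%032x", new BigInteger(1, digest));
        } catch (NoSuchAlgorithmException e) {
            // MD5 is guaranteed on every standard JRE
            throw new IllegalStateException("MD5 not available", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(md5Hex("test")); // 098f6bcd4621d373cade4e832627b4f6
    }
}
```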
Tuning the Redis Cache Configuration

# Redis settings in application.yml
spring:
  redis:
    host: localhost
    port: 6379
    database: 0
    timeout: 2000ms
    lettuce:
      pool:
        max-active: 20
        max-idle: 10
        min-idle: 5
        max-wait: 2000ms
  # Cache settings
  cache:
    type: redis
    redis:
      time-to-live: 1800000   # 30 minutes
      cache-null-values: false
      key-prefix: "mybatis:cache:"
Cache Warm-Up Strategy

@Component
public class CacheWarmupService {

    @Autowired
    private UserMapper userMapper;

    @PostConstruct
    public void warmupCache() {
        log.info("Starting cache warm-up...");
        // Pre-load hot data
        List<Long> hotUserIds = getHotUserIds();
        for (Long userId : hotUserIds) {
            userMapper.selectById(userId);
        }
        log.info("Cache warm-up complete, entries loaded: {}", hotUserIds.size());
    }

    private List<Long> getHotUserIds() {
        // Fetch hot user IDs from statistics or configuration
        return Arrays.asList(1L, 2L, 3L, 4L, 5L);
    }
}
🎯 Performance Optimization in Practice: Double Your Application's Performance
SQL Execution Analysis and Tuning: Finding the Bottlenecks
MyBatis's SQL execution logging helps pinpoint performance problems:

<!-- Enable SQL execution logging -->
<settings>
    <setting name="logImpl" value="STDOUT_LOGGING"/>
    <!-- Or use SLF4J -->
    <!-- <setting name="logImpl" value="SLF4J"/> -->
</settings>
// Caching options when using the @Select annotation
@Select("SELECT * FROM user WHERE age > #{age}")
@Options(useCache = true, flushCache = Options.FlushCachePolicy.FALSE)
List<User> selectUsersByAge(@Param("age") int age);
Custom SQL Execution-Time Monitoring

@Intercepts({
    @Signature(type = Executor.class, method = "query",
               args = {MappedStatement.class, Object.class, RowBounds.class, ResultHandler.class})
})
public class SqlExecutionTimeInterceptor implements Interceptor {

    private static final Logger log = LoggerFactory.getLogger(SqlExecutionTimeInterceptor.class);

    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        long startTime = System.currentTimeMillis();
        try {
            Object result = invocation.proceed();
            long executionTime = System.currentTimeMillis() - startTime;
            // Log any SQL exceeding the threshold
            if (executionTime > 1000) { // 1-second threshold
                MappedStatement ms = (MappedStatement) invocation.getArgs()[0];
                log.warn("Slow SQL detected - execution time: {}ms, SQL ID: {}", executionTime, ms.getId());
            }
            return result;
        } catch (Exception e) {
            log.error("SQL execution failed", e);
            throw e;
        }
    }

    @Override
    public Object plugin(Object target) {
        return Plugin.wrap(target, this);
    }
}
Integrating SQL Performance Analysis Tools

@Configuration
public class MyBatisConfig {

    @Bean
    public SqlSessionFactory sqlSessionFactory(DataSource dataSource) throws Exception {
        SqlSessionFactoryBean sessionFactory = new SqlSessionFactoryBean();
        sessionFactory.setDataSource(dataSource);
        // Register the monitoring interceptor and the pagination plugin
        sessionFactory.setPlugins(new Interceptor[]{
                new SqlExecutionTimeInterceptor(),
                new PageInterceptor() // PageHelper pagination plugin
        });
        return sessionFactory.getObject();
    }
}
Batch Operations: Say Goodbye to Inefficient Row-by-Row SQL
Batch operations are a key lever for performance:

<!-- Optimized batch insert -->
<insert id="batchInsert" parameterType="list">
    INSERT INTO user (name, email, age) VALUES
    <foreach collection="list" item="user" separator=",">
        (#{user.name}, #{user.email}, #{user.age})
    </foreach>
</insert>

<!-- Batch update via multiple statements -->
<update id="batchUpdate" parameterType="list">
    <foreach collection="list" item="user" separator=";">
        UPDATE user SET name = #{user.name}, email = #{user.email} WHERE id = #{user.id}
    </foreach>
</update>

<!-- Batch update with CASE WHEN -->
<update id="batchUpdateWithCase" parameterType="list">
    UPDATE user SET
    name = CASE id
        <foreach collection="list" item="user">
            WHEN #{user.id} THEN #{user.name}
        </foreach>
    END,
    email = CASE id
        <foreach collection="list" item="user">
            WHEN #{user.id} THEN #{user.email}
        </foreach>
    END
    WHERE id IN
    <foreach collection="list" item="user" open="(" separator="," close=")">
        #{user.id}
    </foreach>
</update>
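One caveat worth flagging (an assumption about a typical MySQL deployment): the semicolon-separated `batchUpdate` above sends several statements in a single JDBC call, which MySQL Connector/J rejects unless multi-statement support is enabled in the JDBC URL:

```yaml
# application.yml -- illustrative URL; host and schema are placeholders
spring:
  datasource:
    # allowMultiQueries enables the ";"-separated <foreach> update;
    # rewriteBatchedStatements speeds up true JDBC batching
    url: jdbc:mysql://localhost:3306/mydb?allowMultiQueries=true&rewriteBatchedStatements=true
```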
// Batch operations in Java
@Service
public class UserService {

    @Autowired
    private UserMapper userMapper;

    @Transactional
    public void batchSaveUsers(List<User> users) {
        if (users == null || users.isEmpty()) {
            return;
        }
        // Process in chunks to avoid overly long SQL and excessive memory use
        int batchSize = 1000;
        for (int i = 0; i < users.size(); i += batchSize) {
            int end = Math.min(i + batchSize, users.size());
            List<User> batch = users.subList(i, end);
            userMapper.batchInsert(batch);
        }
        // When driving a raw SqlSession (e.g. ExecutorType.BATCH), also clear
        // the local cache periodically with sqlSession.clearCache() to bound memory
    }

    @Transactional
    public void batchUpdateUsers(List<User> users) {
        if (users == null || users.isEmpty()) {
            return;
        }
        // CASE WHEN batch updates perform better than per-row statements
        int batchSize = 500; // use smaller chunks for updates
        for (int i = 0; i < users.size(); i += batchSize) {
            int end = Math.min(i + batchSize, users.size());
            List<User> batch = users.subList(i, end);
            userMapper.batchUpdateWithCase(batch);
        }
    }
}
Benchmarking Batch Operations

@Test
public void testBatchPerformance() {
    List<User> users = generateTestUsers(10000);

    // Row-by-row insert
    long startTime = System.currentTimeMillis();
    for (User user : users) {
        userMapper.insert(user);
    }
    long singleInsertTime = System.currentTimeMillis() - startTime;

    // Batch insert
    startTime = System.currentTimeMillis();
    userMapper.batchInsert(users);
    long batchInsertTime = System.currentTimeMillis() - startTime;

    log.info("Row-by-row insert took: {}ms", singleInsertTime);
    log.info("Batch insert took: {}ms", batchInsertTime);
    log.info("Speed-up: {}x", (double) singleInsertTime / batchInsertTime);
}
Pagination Best Practices: Handling Large Data Sets Efficiently
Use the PageHelper plugin for convenient pagination:

// Pagination with PageHelper
@Service
public class UserService {

    public PageInfo<User> getUsersByPage(int pageNum, int pageSize) {
        // Set pagination parameters for the next query
        PageHelper.startPage(pageNum, pageSize);
        // Run the query
        List<User> users = userMapper.selectAllUsers();
        // Wrap the result with paging metadata
        return new PageInfo<>(users);
    }

    // Custom pagination that skips the count query
    public List<User> getUsersWithLimit(int offset, int limit) {
        return userMapper.selectUsersWithLimit(offset, limit);
    }

    // Cursor (keyset) pagination for very large data sets
    public List<User> getUsersByCursor(Long lastId, int limit) {
        return userMapper.selectUsersByCursor(lastId, limit);
    }
}
<!-- Cursor (keyset) pagination -->
<select id="selectUsersByCursor" resultType="User">
    SELECT * FROM user
    <where>
        <if test="lastId != null">
            id > #{lastId}
        </if>
    </where>
    ORDER BY id ASC
    LIMIT #{limit}
</select>

<!-- Optimized offset pagination using a covering-index subquery -->
<select id="selectUsersOptimized" resultType="User">
    SELECT u.* FROM user u
    INNER JOIN (
        SELECT id FROM user ORDER BY id LIMIT #{offset}, #{limit}
    ) t ON u.id = t.id
</select>
Pagination Performance Strategies

@Service
public class OptimizedPagingService {

    // Deep pagination: switch to the subquery-based select
    public PageInfo<User> getDeepPageUsers(int pageNum, int pageSize) {
        if (pageNum > 100) { // deep-pagination threshold
            int offset = (pageNum - 1) * pageSize;
            List<User> users = userMapper.selectUsersOptimized(offset, pageSize);
            // Build PageInfo manually to skip the count query
            PageInfo<User> pageInfo = new PageInfo<>();
            pageInfo.setList(users);
            pageInfo.setPageNum(pageNum);
            pageInfo.setPageSize(pageSize);
            pageInfo.setHasNextPage(users.size() == pageSize);
            return pageInfo;
        } else {
            // Normal pagination
            PageHelper.startPage(pageNum, pageSize);
            List<User> users = userMapper.selectAllUsers();
            return new PageInfo<>(users);
        }
    }

    // Cache the total count to avoid repeated count queries
    @Cacheable(value = "userCount", key = "'total'")
    public long getTotalUserCount() {
        return userMapper.countUsers();
    }
}
Connection Pool Tuning: Sizing Database Connections Sensibly

# Connection pool settings in application.yml
spring:
  datasource:
    type: com.zaxxer.hikari.HikariDataSource
    hikari:
      # Core settings
      minimum-idle: 10                  # minimum idle connections
      maximum-pool-size: 20             # maximum pool size
      idle-timeout: 300000              # idle connection timeout (5 minutes)
      max-lifetime: 1800000             # maximum connection lifetime (30 minutes)
      connection-timeout: 30000         # connection timeout (30 seconds)
      # Performance settings
      validation-timeout: 5000          # connection validation timeout
      leak-detection-threshold: 60000   # leak detection threshold (1 minute)
      initialization-fail-timeout: 1    # initialization failure timeout
      # Connection testing
      connection-test-query: SELECT 1
      # Miscellaneous
      pool-name: "HikariCP-MyBatis"
      auto-commit: true
      read-only: false
      # Database-specific (MySQL driver) properties
      data-source-properties:
        cachePrepStmts: true             # cache prepared statements
        prepStmtCacheSize: 250           # prepared-statement cache size
        prepStmtCacheSqlLimit: 2048      # max SQL length for the cache
        useServerPrepStmts: true         # use server-side prepared statements
        useLocalSessionState: true       # use local session state
        rewriteBatchedStatements: true   # rewrite batched statements
        cacheResultSetMetadata: true     # cache result set metadata
        cacheServerConfiguration: true   # cache server configuration
        elideSetAutoCommits: true        # skip redundant autocommit calls
        maintainTimeStats: false         # disable time statistics
Connection Pool Monitoring and Tuning

@Component
public class HikariPoolMonitor {

    @Autowired
    private HikariDataSource dataSource;

    @Scheduled(fixedRate = 60000) // check once per minute
    public void monitorConnectionPool() {
        HikariPoolMXBean poolMXBean = dataSource.getHikariPoolMXBean();
        log.info("Pool stats - active: {}, idle: {}, waiting: {}, total: {}",
                poolMXBean.getActiveConnections(),
                poolMXBean.getIdleConnections(),
                poolMXBean.getThreadsAwaitingConnection(),
                poolMXBean.getTotalConnections());

        // Alert on high pool utilization
        int activeConnections = poolMXBean.getActiveConnections();
        int maxPoolSize = dataSource.getMaximumPoolSize();
        double usageRate = (double) activeConnections / maxPoolSize;
        if (usageRate > 0.8) {
            // SLF4J has no {:.2f} placeholder, so format the number explicitly
            log.warn("Pool utilization too high: {}%, consider enlarging the pool",
                    String.format("%.2f", usageRate * 100));
        }
    }
}
Tuning Lazy Loading: Fetching Associations on Demand
Used well, lazy loading can significantly boost performance:

<!-- Enable lazy loading -->
<settings>
    <setting name="lazyLoadingEnabled" value="true"/>
    <setting name="aggressiveLazyLoading" value="false"/>
    <!-- Methods that trigger lazy loading -->
    <setting name="lazyLoadTriggerMethods" value="equals,clone,hashCode,toString"/>
</settings>

<!-- Lazy loading for association queries -->
<resultMap id="UserResultMap" type="User">
    <id property="id" column="id"/>
    <result property="name" column="name"/>
    <result property="email" column="email"/>
    <!-- One-to-many association, lazily loaded -->
    <collection property="orders" ofType="Order"
                select="selectOrdersByUserId" column="id" fetchType="lazy"/>
    <!-- One-to-one association, lazily loaded -->
    <association property="profile" javaType="UserProfile"
                 select="selectProfileByUserId"
                 column="id"
                 fetchType="lazy"/>
</resultMap>

<!-- Association queries -->
<select id="selectOrdersByUserId" resultType="Order">
    SELECT * FROM orders WHERE user_id = #{userId}
</select>

<select id="selectProfileByUserId" resultType="UserProfile">
    SELECT * FROM user_profile WHERE user_id = #{userId}
</select>
// Associated data is loaded only when accessed
@Test
public void testLazyLoading() {
    User user = userMapper.selectById(1);
    // orders and profile are not loaded yet
    log.info("User: {}", user.getName());

    // Accessing orders triggers the lazy load
    List<Order> orders = user.getOrders();
    log.info("Order count: {}", orders.size());

    // Accessing profile triggers the lazy load
    UserProfile profile = user.getProfile();
    log.info("Profile: {}", profile.getBio());
}
Lazy-Loading Performance Tips

// Batch loading to avoid the N+1 problem
@Service
public class UserService {

    public List<User> getUsersWithOrdersOptimized(List<Long> userIds) {
        // Load the users first
        List<User> users = userMapper.selectByIds(userIds);

        // Load all their orders in a single query
        List<Order> allOrders = orderMapper.selectByUserIds(userIds);

        // Assemble the associations by hand
        Map<Long, List<Order>> orderMap = allOrders.stream()
                .collect(Collectors.groupingBy(Order::getUserId));
        users.forEach(user -> {
            List<Order> userOrders = orderMap.getOrDefault(user.getId(), new ArrayList<>());
            user.setOrders(userOrders);
        });
        return users;
    }
}
Conditional Lazy Loading

<!-- Choose the profile mapping based on user type -->
<resultMap id="UserResultMapConditional" type="User">
    <id property="id" column="id"/>
    <result property="name" column="name"/>
    <!-- Load the profile lazily, picking the result type by user_type -->
    <association property="profile" javaType="UserProfile"
                 select="selectProfileByUserId"
                 column="{userId=id,userType=user_type}"
                 fetchType="lazy">
        <discriminator javaType="string" column="user_type">
            <case value="VIP" resultType="VipUserProfile"/>
            <case value="NORMAL" resultType="NormalUserProfile"/>
        </discriminator>
    </association>
</resultMap>
🔧 Advanced Performance Optimization Techniques
Result Set Processing

// Process large result sets with a ResultHandler
@Mapper
public interface UserMapper {
    void selectAllUsersWithHandler(ResultHandler<User> handler);
}

<select id="selectAllUsersWithHandler" resultType="User">
    SELECT * FROM user
</select>
// Stream large data sets to an output
@Service
public class UserExportService {

    public void exportUsers(OutputStream outputStream) {
        try (PrintWriter writer = new PrintWriter(outputStream)) {
            writer.println("ID,Name,Email,Age"); // CSV header
            userMapper.selectAllUsersWithHandler(resultContext -> {
                User user = resultContext.getResultObject();
                writer.printf("%d,%s,%s,%d%n",
                        user.getId(), user.getName(), user.getEmail(), user.getAge());
                // Flush every 1000 rows
                if (resultContext.getResultCount() % 1000 == 0) {
                    writer.flush();
                }
            });
        }
    }
}
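For the ResultHandler approach to actually stream rows instead of buffering the whole result set, the JDBC driver needs a streaming hint; on MySQL a common (driver-specific) trick is a fetch size of Integer.MIN_VALUE, which MyBatis exposes as a `fetchSize` attribute on the select:

```xml
<!-- MySQL-specific streaming hint (assumption: MySQL Connector/J driver);
     Integer.MIN_VALUE switches the driver to row-by-row streaming -->
<select id="selectAllUsersWithHandler" resultType="User" fetchSize="-2147483648">
    SELECT * FROM user
</select>
```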
Dynamic SQL Optimization

<!-- Before: dynamic SQL that can perform poorly -->
<select id="selectUsersBadExample" resultType="User">
    SELECT * FROM user
    <where>
        <if test="name != null and name != ''">
            AND name LIKE CONCAT('%', #{name}, '%')
        </if>
        <if test="email != null and email != ''">
            AND email = #{email}
        </if>
        <if test="ageRange != null">
            AND age BETWEEN #{ageRange.min} AND #{ageRange.max}
        </if>
    </where>
    ORDER BY create_time DESC
</select>

<!-- After: dynamic SQL written with indexes and query cost in mind -->
<select id="selectUsersOptimized" resultType="User">
    SELECT * FROM user
    <where>
        <!-- Exact matches first, so indexes can be used -->
        <if test="email != null and email != ''">
            AND email = #{email}
        </if>
        <if test="ageRange != null">
            AND age BETWEEN #{ageRange.min} AND #{ageRange.max}
        </if>
        <!-- Fuzzy matching last -->
        <if test="name != null and name != ''">
            <choose>
                <when test="name.length() >= 3">
                    AND name LIKE CONCAT(#{name}, '%') -- prefix match can use an index
                </when>
                <otherwise>
                    AND name = #{name} -- exact match for short strings
                </otherwise>
            </choose>
        </if>
    </where>
    ORDER BY
    <choose>
        <when test="orderBy != null and orderBy == 'name'">name ASC</when>
        <when test="orderBy != null and orderBy == 'age'">age DESC</when>
        <otherwise>create_time DESC</otherwise>
    </choose>
    <if test="limit != null and limit > 0">
        LIMIT #{limit}
    </if>
</select>
Type Handler Optimization

// Custom type handler for JSON columns
@MappedTypes(UserPreferences.class)
@MappedJdbcTypes(JdbcType.VARCHAR)
public class UserPreferencesTypeHandler extends BaseTypeHandler<UserPreferences> {

    private static final ObjectMapper objectMapper = new ObjectMapper();

    @Override
    public void setNonNullParameter(PreparedStatement ps, int i, UserPreferences parameter,
                                    JdbcType jdbcType) throws SQLException {
        try {
            ps.setString(i, objectMapper.writeValueAsString(parameter));
        } catch (JsonProcessingException e) {
            throw new SQLException("Error converting UserPreferences to JSON", e);
        }
    }

    @Override
    public UserPreferences getNullableResult(ResultSet rs, String columnName) throws SQLException {
        return parseJson(rs.getString(columnName));
    }

    @Override
    public UserPreferences getNullableResult(ResultSet rs, int columnIndex) throws SQLException {
        return parseJson(rs.getString(columnIndex));
    }

    @Override
    public UserPreferences getNullableResult(CallableStatement cs, int columnIndex) throws SQLException {
        return parseJson(cs.getString(columnIndex));
    }

    private UserPreferences parseJson(String json) {
        if (json == null || json.trim().isEmpty()) {
            return null;
        }
        try {
            return objectMapper.readValue(json, UserPreferences.class);
        } catch (JsonProcessingException e) {
            log.warn("Error parsing JSON to UserPreferences: {}", json, e);
            return null;
        }
    }
}
🔧 Summary: The Golden Rules of Performance Optimization
1. The Cache Strategy Pyramid
- Cache first: use the first- and second-level caches sensibly, and choose Redis in distributed deployments
- Batch operations: replace loops of single-row SQL with batch statements
- Paginate: large queries must be paginated; never load everything at once
- Tune the connection pool: size pool parameters to match your workload
- Lazy-load deliberately: fetch associations on demand and avoid shipping unneeded data
💡 Final Thoughts
Mastering MyBatis's advanced features and performance optimizations is an ongoing process of learning and practice. In real projects, pick the optimization strategy that fits your specific business scenario and data characteristics. Remember: there is no silver bullet, only the solution that fits best.
I hope this article helps you go further with MyBatis! If you have questions or ideas, I'd love to hear them in the comments.
Follow me for more backend engineering content, and let's keep growing together! 🚀